980 results for Stochastic Frontier Models
Abstract:
A new frontier in weather forecasting is emerging, with operational forecast models now being run at convection-permitting resolutions at many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluate and improve such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life-cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms and the updrafts within them is much too large at 1.5-km resolution, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.
Abstract:
Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, and to assess the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs. IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life-history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, and then reproduce as fast as possible, while minimising the per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; on the maximum rates of ingestion, growth and reproduction; and on life-history parameters. Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to manually compare model outputs with observations from real landscapes and so obtain parameters which produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated and thus inflexible models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
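As an illustration of how ABC can calibrate an IBM, here is a minimal rejection-ABC sketch in Python; simulate_ibm, the priors, the observed summaries and the tolerance are hypothetical placeholders, not values or code from the paper.

import numpy as np

rng = np.random.default_rng(42)

def simulate_ibm(max_ingestion, mortality):
    # Hypothetical stand-in for a full individual-based model run: it returns
    # two summary statistics (mean population size, mean body mass) for the
    # candidate parameters.
    pop = 100.0 * max_ingestion / (1.0 + 50.0 * mortality)
    mass = 10.0 * max_ingestion
    return np.array([pop, mass])

observed = np.array([60.0, 7.5])          # summaries "measured" in the field (hypothetical)
n_draws, tolerance = 20000, 5.0
accepted = []

for _ in range(n_draws):
    theta = np.array([rng.uniform(0.1, 1.0),    # prior on maximum ingestion rate
                      rng.uniform(0.0, 0.1)])   # prior on per-capita mortality
    # keep the draw if the simulated summaries are close enough to the observations
    if np.linalg.norm(simulate_ibm(*theta) - observed) < tolerance:
        accepted.append(theta)

posterior = np.array(accepted)            # approximate posterior sample for the two parameters
print(len(posterior), posterior.mean(axis=0))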
Abstract:
Deforestation in Brazilian Amazonia accounts for a disproportionate global-scale fraction of both carbon emissions from biomass burning and biodiversity erosion through habitat loss. Here we use field- and remote-sensing data to examine the effects of private landholding size on the amount and type of forest cover retained within economically active rural properties in an aging southern Amazonian deforestation frontier. Data on both upland and riparian forest cover from a survey of 300 rural properties indicated that 49.4% (SD = 29.0%) of the total forest cover was maintained as of 2007, and that property size is a key regional-scale determinant of patterns of deforestation and land-use change. Small properties (<= 150 ha) retained a lower proportion of forest (20.7%, SD = 17.6) than did large properties (>150 ha; 55.6%, SD = 27.2). Generalized linear models showed that property size had a positive effect on remaining areas of both upland and total forest cover. Using a Landsat time series, the age of the first clear-cutting that could be mapped within the boundaries of each property had a negative effect on the proportion of upland, riparian, and total forest cover retained. Based on these data, we show contrasts in land-use strategies between smallholders and largeholders, as well as differences in compliance with legal requirements in relation to minimum forest-cover set-asides within private landholdings. This suggests that property-size structure must be explicitly considered in landscape-scale conservation planning initiatives guiding agro-pastoral frontier expansion into remaining areas of tropical forest.
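A minimal sketch of the kind of generalized linear model described above, fitted with statsmodels on synthetic data; the variable names, the synthetic-data recipe and the logit specification are illustrative assumptions, not the authors' model.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # mirrors the 300 surveyed properties, but the data below are synthetic
df = pd.DataFrame({
    "property_ha":  rng.lognormal(mean=4.5, sigma=1.2, size=n),  # property size in hectares
    "clearing_age": rng.integers(1, 30, size=n),                 # years since first mapped clear-cut
})
# Synthetic retained-forest proportion: rises with size, falls with clearing age
lin = -1.0 + 0.6 * np.log(df["property_ha"]) - 0.05 * df["clearing_age"]
df["forest_prop"] = (1.0 / (1.0 + np.exp(-lin)) + rng.normal(0.0, 0.05, size=n)).clip(0.01, 0.99)

# Binomial GLM (logit link) for the proportion of forest cover retained
model = smf.glm(
    "forest_prop ~ np.log(property_ha) + clearing_age",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(model.summary())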
Abstract:
Little follow-up data on malaria transmission in communities originating from frontier settlements in Amazonia are available. Here we describe a cohort study in a frontier settlement in Acre, Brazil, where 509 subjects contributed 489.7 person-years of follow-up. The association between malaria morbidity during the follow-up and individual, household, and spatial covariates was explored with mixed-effects logistic regression models and spatial analysis. Incidence rates for Plasmodium vivax and Plasmodium falciparum malaria were 30.0/100 and 16.3/100 person-years at risk, respectively. Malaria morbidity was strongly associated with land clearing and farming, and decreased after five years of residence in the area, suggesting that clinical immunity develops among subjects exposed to low malaria endemicity. Significant spatial clustering of malaria was observed in the areas of most recent occupation, indicating that the continuous influx of nonimmune settlers to forest-fringe areas perpetuates the cycle of environmental change and colonization that favors malaria transmission in rural Amazonia.
Abstract:
Two stochastic epidemic lattice models, the susceptible-infected-recovered and the susceptible-exposed-infected models, are studied on a Cayley tree of coordination number k. The spreading of the disease in the former is found to occur when the infection probability b is larger than b_c = k/[2(k - 1)]. In the latter, which is equivalent to a dynamic site percolation model, the spreading occurs when the infection probability p is greater than p_c = 1/(k - 1). We set up and solve the time evolution equations for both models and determine the final and time-dependent properties, including the epidemic curve. We show that the two models are closely related by revealing that their relevant properties are exactly mapped into each other when p = b/[k - (k - 1)b]. These include the cluster size distribution and the density of individuals of each type, quantities that have been determined in closed form.
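As a consistency check implied by the abstract itself, substituting p_c = 1/(k - 1) into the stated mapping p = b/[k - (k - 1)b] recovers the quoted SIR threshold:

\frac{b_c}{k-(k-1)\,b_c} = \frac{1}{k-1}
\;\Longrightarrow\;
(k-1)\,b_c = k-(k-1)\,b_c
\;\Longrightarrow\;
b_c = \frac{k}{2(k-1)}.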
Abstract:
We study a stochastic process describing the onset of spreading dynamics of an epidemic in a population composed of individuals of three classes: susceptible (S), infected (I), and recovered (R). The stochastic process is defined by local rules and involves the following cyclic process: S -> I -> R -> S (SIRS). The open process S -> I -> R (SIR) is studied as a particular case of the SIRS process. The epidemic process is analyzed at different levels of description: by a stochastic lattice gas model and by a birth-and-death process. By means of Monte Carlo simulations and dynamical mean-field approximations we show that the SIRS stochastic lattice gas model exhibits a line of critical points separating two phases: an absorbing phase where the lattice is completely full of S individuals and an active phase where S, I and R individuals coexist, which may or may not present population cycles. The critical line, which corresponds to the onset of epidemic spreading, is shown to belong to the directed percolation universality class. By considering the birth-and-death process we analyze the role of noise in stabilizing the oscillations.
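A minimal Monte Carlo sketch of an SIRS process on a square lattice, in Python; the local rules and the rates b (infection), c (recovery) and a (loss of immunity) are illustrative choices, not the exact rules or parameters of the paper.

import numpy as np

L = 64
S, I, R = 0, 1, 2
b, c, a = 0.6, 0.2, 0.2                  # infection, I -> R, and R -> S rates (illustrative)
rng = np.random.default_rng(0)
lattice = np.full((L, L), S)
lattice[L // 2, L // 2] = I              # seed a single infected site

def infected_neighbours(x, y):
    # number of infected nearest neighbours with periodic boundaries
    return sum(lattice[(x + dx) % L, (y + dy) % L] == I
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(100 * L * L):             # random sequential updates
    x, y = rng.integers(L, size=2)
    state = lattice[x, y]
    if state == S:
        # infection probability proportional to the fraction of infected neighbours
        if rng.random() < b * infected_neighbours(x, y) / 4.0:
            lattice[x, y] = I
    elif state == I and rng.random() < c:
        lattice[x, y] = R
    elif state == R and rng.random() < a:
        lattice[x, y] = S

print("S/I/R densities:", [float(np.mean(lattice == s)) for s in (S, I, R)])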
Abstract:
We have studied by numerical simulations the relaxation of the stochastic seven-state Potts model after a quench from a high temperature down to a temperature below the first-order transition. For quench temperatures just below the transition temperature, the phase ordering occurs by simple coarsening under the action of surface tension. For sufficiently low temperatures, however, the straightening of the interfaces between domains drives the system toward a metastable disordered state, identified as a glassy state. Escaping from this state occurs, if the quench temperature is nonzero, by thermally activated dynamics that eventually drives the system toward the equilibrium state.
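A minimal Metropolis sketch of such a quench for the two-dimensional seven-state Potts model; the lattice size, quench temperature and number of updates are illustrative choices, not the simulation settings of the paper.

import numpy as np

L, q, T = 64, 7, 0.5                      # lattice size, Potts states, quench temperature
rng = np.random.default_rng(1)
spins = rng.integers(q, size=(L, L))      # random (high-temperature) initial condition

def local_energy(state, x, y):
    # Potts energy of a site: -1 for each like nearest neighbour (periodic boundaries)
    return -sum(state == spins[(x + dx) % L, (y + dy) % L]
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(100 * L * L):              # single-spin Metropolis updates after the quench
    x, y = rng.integers(L, size=2)
    old, new = spins[x, y], rng.integers(q)
    dE = local_energy(new, x, y) - local_energy(old, x, y)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[x, y] = new

# fraction of sites in the majority state, a crude order parameter after the quench
print(np.bincount(spins.ravel(), minlength=q).max() / (L * L))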
Abstract:
The coexistence between different types of templates has been the solution of choice to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null.
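One heuristic way to see why the total information Ld should be capped by the per-nucleotide error rate alone (a back-of-the-envelope argument under stated assumptions, not the paper's derivation): if u is the error rate per nucleotide, a protocell copies all d templates of length L without error with probability

P_0 = (1-u)^{Ld} \approx e^{-uLd},

so requiring P_0 to stay above a fixed viability threshold \epsilon gives Ld \lesssim \ln(1/\epsilon)/u, a constant determined by u.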
Abstract:
We analyze the stability properties of equilibrium solutions and the periodicity of orbits in a two-dimensional dynamical system whose orbits mimic the evolution of the price of an asset and the excess demand for that asset. The construction of the system is grounded upon a heterogeneous interacting agent model for a single risky-asset market. An advantage of this construction procedure is that the resulting dynamical system becomes a macroscopic market model which mirrors market quantities and qualities that would typically be taken into account solely at the microscopic level of modeling. The system's parameters correspond to: (a) the proportion of speculators in a market; (b) the traders' speculative trend; (c) the degree of heterogeneity of the market agents' idiosyncratic evaluations of the asset's fundamental value; and (d) the strength of the feedback of the population excess demand on the asset price update increment. This correspondence allows us to employ our results to infer plausible causes for the emergence of price and demand fluctuations in a real asset market. The use of dynamical systems to study the evolution of stochastic models of socio-economic phenomena is quite usual in the area of heterogeneous interacting agent models. However, in the vast majority of cases in the literature, these dynamical systems are one-dimensional. Our work is among the few in the area that construct and analytically study a two-dimensional dynamical system and apply it to the explanation of socio-economic phenomena.
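Purely to illustrate what iterating a two-dimensional (price, excess demand) system looks like, the Python sketch below uses a generic linear chartist/fundamentalist feedback map; the functional form and the parameter values are invented for the example and are not the system derived in the paper.

# Illustrative 2D map for (price p, excess demand e); NOT the paper's system.
alpha = 0.3       # strength of the speculative (trend-following) component
beta = 0.8        # pull of idiosyncratic valuations toward the fundamental value
lam = 0.5         # feedback of excess demand on the price update increment
p_fund = 1.0      # fundamental value of the asset

p, e = 1.2, 0.0   # initial price and excess demand
trajectory = []
for _ in range(200):
    p_next = p + lam * e                                      # price reacts to excess demand
    e_next = alpha * (p_next - p) - beta * (p_next - p_fund)  # chartist + fundamentalist demand
    p, e = p_next, e_next
    trajectory.append((p, e))

print(trajectory[-5:])   # inspect whether the orbit settles to equilibrium or cycles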
Abstract:
Mathematical models, as instruments for understanding the workings of nature, are a traditional tool of physics, but they also play an ever increasing role in biology - in the description of fundamental processes as well as that of complex systems. In this review, the authors discuss two examples of the application to genetics of group-theoretical methods, which constitute the mathematical discipline for a quantitative description of the idea of symmetry. The first appears, in the form of a pseudo-orthogonal (Lorentz-like) symmetry, in the stochastic modelling of what may be regarded as the simplest possible example of a genetic network and, hopefully, a building block for more complicated ones: a single self-interacting or externally regulated gene with only two possible states, 'on' and 'off'. The second is the algebraic approach to the evolution of the genetic code, according to which the current code results from a dynamical symmetry-breaking process, starting out from an initial state of complete symmetry and ending in the presently observed final state of low symmetry. In both cases, symmetry plays a decisive role: in the first, it is a characteristic feature of the dynamics of the gene switch and its decay to equilibrium, whereas in the second, it provides the guidelines for the evolution of the coding rules.
Abstract:
This paper presents techniques of likelihood prediction for generalized linear mixed models. The method of likelihood prediction is explained through a series of examples, from a classical one to more complicated ones. The examples show, in simple cases, that likelihood prediction (LP) coincides with already established best frequentist practice, such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson errors-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than existing methods.
Abstract:
This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both the asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
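For reference, the generic textbook form of a GLS IV estimator for a model y = X\beta + u with instrument matrix Z and error covariance \Omega is

\hat{\beta}_{\mathrm{GLS\;IV}} = \bigl[X'Z\,(Z'\Omega Z)^{-1}Z'X\bigr]^{-1} X'Z\,(Z'\Omega Z)^{-1}Z'y ,

with the feasible version replacing \Omega by a consistent estimate, here built from the stationary VAR error structure; this is standard notation given for illustration, not necessarily the exact estimator analyzed in the paper.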
Abstract:
Asset allocation decisions and value-at-risk calculations rely strongly on volatility estimates. Volatility measures such as rolling window, EWMA, GARCH and stochastic volatility are used in practice. GARCH and EWMA type models that incorporate the dynamic structure of volatility and are capable of forecasting the future behavior of risk should perform better than constant, rolling-window volatility models. For the same asset, the model that is ‘best’ according to some criterion can change from period to period. We use the reality check test to verify whether one model outperforms others over a class of re-sampled time-series data. The test is based on re-sampling the data using the stationary bootstrap. For each re-sample we identify the ‘best’ model according to two criteria and analyze the distribution of the performance statistics. We compare constant volatility, EWMA and GARCH models using a quadratic utility function and a risk management measure as comparison criteria. No model consistently outperforms the benchmark.
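A Python sketch of the comparison idea on re-sampled data: an EWMA volatility forecast against a constant-volatility benchmark, scored by squared-return forecast error on stationary-bootstrap resamples. This illustrates the mechanics only; it is not White's reality check statistic, and the synthetic returns, loss function and parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=1000) * 0.01     # synthetic daily returns

def ewma_vol(r, lam=0.94):
    # RiskMetrics-style exponentially weighted volatility estimate
    var = np.empty_like(r)
    var[0] = r[:30].var()
    for t in range(1, len(r)):
        var[t] = lam * var[t - 1] + (1 - lam) * r[t - 1] ** 2
    return np.sqrt(var)

def stationary_bootstrap(x, mean_block=20):
    # Politis-Romano stationary bootstrap: geometric block lengths, wrapped indices
    n = len(x)
    idx, out = rng.integers(n), []
    while len(out) < n:
        out.append(idx)
        idx = rng.integers(n) if rng.random() < 1.0 / mean_block else (idx + 1) % n
    return x[np.array(out)]

# Compare EWMA against a constant-volatility benchmark using squared returns as the proxy
wins = []
for _ in range(500):
    r = stationary_bootstrap(returns)
    mse_ewma = np.mean((r ** 2 - ewma_vol(r) ** 2) ** 2)
    mse_const = np.mean((r ** 2 - r.var()) ** 2)
    wins.append(mse_ewma < mse_const)
print("fraction of resamples where EWMA beats constant volatility:", np.mean(wins))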
Abstract:
We aim to provide a review of the stochastic discount factor bounds usually applied to diagnose asset pricing models. In particular, we mainly discuss the bounds used to analyze the disaster model of Barro (2006). We focus on this disaster model because the stochastic discount factor bounds applied to study the performance of disaster models usually consider the approach of Barro (2006). We first present the entropy bounds that provide a diagnosis of the analyzed disaster model, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). We then discuss how their results for the disaster model are related to each other, and also present the findings of other methodologies that are similar to these bounds but provide different evidence about the performance of the framework developed by Barro (2006).
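For context, the entropy bounds discussed here generalize the classical Hansen and Jagannathan (1991) variance bound, which follows from E[m R^e] = 0 together with the Cauchy-Schwarz inequality:

\frac{\sigma(m)}{\mathbb{E}[m]} \;\ge\; \frac{\lvert \mathbb{E}[R^{e}] \rvert}{\sigma(R^{e})}

for any excess return R^e priced by the stochastic discount factor m.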
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)