Abstract:
Recent findings demonstrate that trees in deserts are efficient carbon sinks. It remains unknown, however, whether the Clean Development Mechanism will accelerate the planting of trees in Non-Annex I dryland countries. We estimated the price of carbon at which a farmer would be indifferent between his customary activity and planting trees to trade carbon credits, along an aridity gradient. Carbon yields were simulated with the CO2FIX v3.1 model for Pinus halepensis and its respective yield classes along the gradient (arid, 100 mm, to dry sub-humid conditions, 900 mm). Wheat and pasture yields were predicted with broadly similar nitrogen-based quadratic models, using 30 years of weather data to simulate moisture stress. Stochastic production, input and output prices were then simulated in a Monte Carlo matrix. Results show that, despite the high levels of carbon uptake, carbon trading through afforestation is unprofitable anywhere along the gradient. Indeed, the price of carbon would have to rise unrealistically high, and certification costs would have to drop significantly, to make the Clean Development Mechanism worthwhile for farmers in Non-Annex I dryland countries. From a government agency's point of view, the Clean Development Mechanism is attractive. However, such agencies will find it difficult to demonstrate "additionality", even if the rule is applied somewhat flexibly. Based on these findings, we further discuss why the Clean Development Mechanism, a supposedly pro-poor instrument, fails to assist farmers in Non-Annex I dryland countries living at the minimum subsistence level.
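A rough illustration of the kind of Monte Carlo indifference-price calculation described above (not the authors' CO2FIX-based model): stochastic crop yields, prices and carbon yields are drawn, and the carbon price that equates expected afforestation and crop margins is computed. All distributions and figures below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo draws (hypothetical)

# Hypothetical annualized quantities per hectare
wheat_yield = rng.normal(2.0, 0.6, n).clip(min=0)     # t/ha, moisture-stress variability
wheat_price = rng.lognormal(np.log(180), 0.15, n)     # $/t
input_cost = rng.normal(120, 20, n)                   # $/ha

carbon_yield = rng.normal(3.5, 0.8, n).clip(min=0)    # t CO2/ha/yr sequestered
certification_cost = 40.0                             # $/ha/yr (hypothetical)

crop_margin = wheat_yield * wheat_price - input_cost  # $/ha/yr

# Indifference (break-even) carbon price: expected afforestation margin
# equals expected crop margin:  E[carbon_yield] * p_c - certification = E[crop_margin]
p_indiff = (crop_margin.mean() + certification_cost) / carbon_yield.mean()
print(f"Break-even carbon price ≈ {p_indiff:.1f} $/t CO2")
```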
Abstract:
Stochastic simulation is an important and practical technique for computing probabilities of rare events, such as the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of an insurer's risk process. Rare events occur so infrequently that they cannot reasonably be recorded during a standard simulation procedure: specific simulation algorithms that counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare-event probabilities are either logarithmically efficient or possess bounded relative error.
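A minimal, self-contained illustration of importance sampling for a rare event (not tied to any particular option or ruin model): the probability P(X > 6) for a standard normal X is estimated by shifting the sampling distribution to N(6, 1) and reweighting with the likelihood ratio.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, a = 100_000, 6.0          # sample size and rare-event threshold

# Naive Monte Carlo: almost never observes the event {X > a}
x = rng.standard_normal(n)
naive = (x > a).mean()

# Importance sampling: draw from N(a, 1) and reweight by the likelihood ratio f(y)/g(y)
y = rng.normal(a, 1.0, n)
w = norm.pdf(y) / norm.pdf(y, loc=a, scale=1.0)
is_est = np.mean((y > a) * w)

print(f"naive: {naive:.3e}  importance sampling: {is_est:.3e}  exact: {norm.sf(a):.3e}")
```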
Abstract:
Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but that they depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the person on the ball and to indicate where they would pass the ball within each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being covered safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality-robust standard errors was used to estimate the parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (βself-efficacy = 0.133, p < .001, OR = 1.142; βgroup-efficacy I = 0.128, p < .001, OR = 1.137; βoffensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (βgroup-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (βself-efficacy = 0.135, p < .01; βoffensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players who are part of a team, this psychological landscape is constituted not only by probabilities that refer to the potential and consequences of individual behavior, but also by those that refer to the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informers extend to aspects referring to the group level.
References: Lewin, K. (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). New York: Harper & Brothers.
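To make the reported coefficient-to-odds-ratio link concrete, the sketch below converts logit coefficients like those above into odds ratios and a predicted choice probability for one zone. It is a simplified binomial logit with invented zone ratings and an assumed intercept, not the Mplus specification used in the study.

```python
import numpy as np

# Coefficients on the logit scale for a passing-choice model (values as reported above)
beta = {"self_efficacy": 0.133, "group_efficacy": 0.128, "offensive_convenience": 0.057}

# Odds ratio per one-point increase on the 1-10 rating scale: OR = exp(beta)
for name, b in beta.items():
    print(f"{name}: OR = {np.exp(b):.3f}")

# Hypothetical ratings for one playfield zone (1-10 visual analog scale)
zone = {"self_efficacy": 8.0, "group_efficacy": 7.0, "offensive_convenience": 6.0}
intercept = -2.5  # hypothetical

logit = intercept + sum(beta[k] * zone[k] for k in beta)
prob = 1.0 / (1.0 + np.exp(-logit))
print(f"Predicted probability of choosing this zone: {prob:.2f}")
```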
Abstract:
The application of Markov processes to health-care problems is very useful. The objective of this study is to provide a structured methodology for forecasting cost by combining a stochastic model of utilization (a Markov chain) with a deterministic cost function. The cost perspective in this study is the reimbursement for services rendered. The data to be used are the OneCare database of claim records for its enrollees over a two-year period, January 1, 1996 to December 31, 1997. The model combines a Markov chain that describes the utilization pattern and its variability, in which the use of resources by risk groups (age, gender, and diagnosis) is considered, with a cost function determined from a fixed schedule based on real costs or charges for those in the OneCare claims database. The cost function is a secondary application to the model. Goodness of fit will be checked for the model against the traditional method of cost forecasting.
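A minimal sketch of the general idea of coupling a Markov chain over utilization states with a deterministic per-state cost vector to forecast expected cost. The states, transition probabilities and costs below are hypothetical and are not taken from the OneCare data.

```python
import numpy as np

# Hypothetical utilization states: 0 = no visit, 1 = outpatient, 2 = inpatient
P = np.array([[0.85, 0.12, 0.03],    # monthly transition probabilities
              [0.60, 0.30, 0.10],
              [0.40, 0.35, 0.25]])
cost = np.array([0.0, 150.0, 4000.0])  # reimbursement per state per month ($)

pi = np.array([1.0, 0.0, 0.0])         # everyone starts in "no visit"
horizon = 24                           # two-year claims window, in months

expected_total = 0.0
for _ in range(horizon):
    pi = pi @ P                        # state distribution next month
    expected_total += pi @ cost        # expected cost that month

print(f"Expected 24-month cost per enrollee: ${expected_total:,.0f}")
```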
Abstract:
Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well documented in previous work. We provide important results for the existence of Markov equilibria in the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics, such as models with fiat money and social security. We also show how some of our results extend to settings with unbounded state spaces.
Abstract:
This paper provides new sufficient conditions for the existence, computation via successive approximations, and stability of Markovian equilibrium decision processes for a large class of OLG models with stochastic nonclassical production. Our notion of stability is the existence of stationary Markovian equilibrium. With nonclassical production, our economies encompass a large class of OLG models with public policy, valued fiat money, production externalities, and Markov shocks to production. Our approach combines aspects of both topological and order-theoretic fixed point theory, and provides the basis for globally stable numerical iteration procedures for computing extremal Markovian equilibrium objects. In addition to new theoretical results on existence and computation, we provide some monotone comparative statics results on the space of economies.
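As a toy illustration of computing extremal fixed points by successive approximations on a monotone operator (a generic sketch of the numerical idea, not the equilibrium operator studied in these papers), iterates started from the smallest and largest admissible policies bracket the extremal fixed points:

```python
import numpy as np

# Toy monotone operator on policies defined on a capital grid:
# T(h)(k) = 0.5 * f(k) + 0.4 * h(k), with f(k) = k**0.3 (all choices hypothetical).
grid = np.linspace(0.1, 2.0, 200)
f = grid ** 0.3

def T(h):
    return 0.5 * f + 0.4 * h   # increasing in h pointwise, so T is monotone

# Successive approximations from the smallest and largest admissible policies
lo, hi = np.zeros_like(grid), np.full_like(grid, 10.0)
for _ in range(200):
    lo, hi = T(lo), T(hi)

print("max gap between extremal iterates:", np.max(hi - lo))
```

Because this toy operator is also a contraction, the two extremal iterates converge to the same fixed point; under weaker assumptions only the bracketing of extremal fixed points is retained.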
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT associates with more inefficiency when we measure output with assets. When we use revenue to measure output, self-management associates with less inefficiency. Also contrary to previous research, higher leverage associates with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
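For reference, a generic translog stochastic cost frontier for panel data takes roughly the form below (a simplified textbook form with input-price and cross terms omitted, not necessarily the exact specification estimated in the paper):

```latex
\ln C_{it} = \beta_0 + \sum_{j} \beta_j \ln y_{jit}
           + \tfrac{1}{2} \sum_{j}\sum_{k} \beta_{jk}\, \ln y_{jit}\, \ln y_{kit}
           + v_{it} + u_{it}
```

where C_{it} is cost, the y_{jit} are outputs, v_{it} is symmetric noise, and u_{it} ≥ 0 captures cost inefficiency.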
Abstract:
In this paper we introduce technical efficiency via an intercept that evolves over time as an AR(1) process in a stochastic frontier (SF) panel-data framework. The distinguishing features of the model are as follows. First, the model is dynamic in nature. Second, it can separate technical inefficiency from fixed firm-specific effects that are not part of inefficiency. Third, the model allows one to estimate technical change separately from change in technical efficiency. We propose the maximum likelihood (ML) method to estimate the parameters of the model. Finally, we derive expressions to calculate/predict technical inefficiency (efficiency).
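One common way to write such a dynamic stochastic frontier (an illustrative sketch of the general idea only, not necessarily the authors' exact specification) is:

```latex
y_{it} = \alpha_i + x_{it}'\beta + v_{it} - u_{it}, \qquad
u_{it} = \rho\, u_{i,t-1} + \varepsilon_{it}, \quad u_{it} \ge 0
```

where the α_i are fixed firm-specific effects that are not part of inefficiency, v_{it} is symmetric noise, and the AR(1) process lets the inefficiency term u_{it} evolve over time.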
Abstract:
Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using mild, moderate or severe disease status as the outcome measure. As in many other outcome-oriented studies, the disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status, and it also estimates the extent of misclassification of a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, the effect of treatment, and other predictive factors. A simulation study was done. First, data based on a set of hypothetical parameters and hypothetical rates of misclassification were created. Next, the maximum likelihood method was employed to generate likelihood equations accounting for misclassification. The Nelder-Mead simplex method was used to solve for the misclassification and model parameters. Finally, this method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean of the estimates was 0.04. Although the estimates of the rates of misclassification of X1 were not as close as those for β1 and β2, they validate this method. The X1 0-to-1 misclassification rate was hypothesized at 2.98% and the mean of the simulated estimates was 1.54%; in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimated odds ratio for X1, having both copies of the APOE 4 allele, changed from 1.377 to 1.418, demonstrating that the odds ratio estimates change when the analysis adjusts for misclassification.
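A compact illustration of the estimation strategy, simplified to a binary rather than ordinal outcome and using made-up data: a likelihood that mixes the true response model with a symmetric misclassification rate is maximized with the Nelder-Mead simplex.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n = 5_000
x = rng.standard_normal(n)
true_b0, true_b1, true_eps = -0.5, 0.5, 0.05            # eps = misclassification rate

y_true = rng.random(n) < expit(true_b0 + true_b1 * x)   # true (latent) outcome
flip = rng.random(n) < true_eps
y_obs = np.where(flip, ~y_true, y_true).astype(float)   # observed, misclassified outcome

def neg_loglik(theta):
    b0, b1, eps = theta
    eps = np.clip(eps, 1e-6, 0.49)
    p = expit(b0 + b1 * x)
    p_obs = (1 - eps) * p + eps * (1 - p)                # P(observed = 1 | x)
    return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.1], method="Nelder-Mead")
print("estimated (b0, b1, misclassification rate):", np.round(fit.x, 3))
```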
Abstract:
This dissertation develops and explores a methodology for using cubic spline functions to assess time-by-covariate interactions in Cox proportional hazards regression models. Such interactions indicate violations of the proportional hazards assumption of the Cox model. The use of cubic spline functions allows the shape of a possible covariate time-dependence to be investigated without having to specify a particular functional form. Cubic spline functions yield a graphical method and a formal test of the proportional hazards assumption, as well as a test of the nonlinearity of the time-by-covariate interaction. Five existing methods for assessing violations of the proportional hazards assumption are reviewed and applied, along with cubic splines, to three well-known two-sample datasets. An additional dataset with three covariates is used to explore the use of cubic spline functions in a more general setting.
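The underlying idea can be sketched as a Cox model in which the covariate effect is allowed to vary with time through a cubic spline (an illustrative form of the approach described above):

```latex
h(t \mid x) = h_0(t)\, \exp\{\, x\, \beta(t) \,\}, \qquad \beta(t) = \beta_0 + s(t)
```

where s(t) is a cubic spline in time; s(t) ≡ 0 corresponds to proportional hazards, while a nonzero or nonlinear s(t) reveals the shape of the time-by-covariate interaction.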
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true values of the parameters, and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected, and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect.

A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1 so that the failure times were exponentially distributed in the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4).

The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and the number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
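A minimal sketch of one piece of such a simulation design, with hypothetical parameter values: exponential failure times are drawn under a hazard that depends on a baseline binary covariate and the current value of a binary time-dependent covariate, and type II censoring stops follow-up after a fixed proportion of failures.

```python
import numpy as np

rng = np.random.default_rng(3)
n, failure_rate = 1_000, 0.10           # sample size and type II censoring fraction
beta_z, beta_x = 0.5, 0.5               # non-time-dependent and time-dependent effects

z = rng.integers(0, 2, n)               # baseline binary covariate
x = rng.integers(0, 2, n)               # time-dependent covariate value in current interval

# Exponential failure times with baseline hazard 1 within the interval
hazard = np.exp(beta_z * z + beta_x * x)
t = rng.exponential(1.0 / hazard)

# Type II censoring: follow up only until the first n * failure_rate failures
k = int(n * failure_rate)
cutoff = np.sort(t)[k - 1]
event = (t <= cutoff).astype(int)
time = np.minimum(t, cutoff)

print(f"events: {event.sum()}  censoring time: {cutoff:.3f}")
```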
Abstract:
The illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, with either a single laser beam or a bundle of NB laser beams associated with each direction, have been considered. The laser beam intensity profile is assumed to be super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity of the whole bundle, reducing the errors associated with the bundle by a factor of 1/√NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and depends only on the total number of laser beams Ntot = ND × NB.
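The statistical-homogenization argument can be checked with a short Monte Carlo sketch using hypothetical numbers: averaging NB statistically independent beam powers reduces the relative RMS error of the bundle roughly by 1/sqrt(NB).

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.05          # 5% rms power imbalance per beam (hypothetical)
trials = 100_000

for nb in (1, 4, 16):
    # Each bundle's power is the mean of nb independent, imperfect beam powers
    bundle = rng.normal(1.0, sigma, size=(trials, nb)).mean(axis=1)
    print(f"NB = {nb:2d}: bundle rms error = {bundle.std():.4f} "
          f"(expected ≈ {sigma / np.sqrt(nb):.4f})")
```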
Abstract:
Nanofabrication has allowed the development of new concepts such as magnetic logic and race-track memory, both of which are based on the displacement of magnetic domain walls along magnetic nanostripes. One of the issues that has to be solved before devices can meet market demands is the stochastic behaviour of domain wall movement in magnetic nanostripes. Here we show that the stochastic nature of domain wall motion in permalloy nanostripes can be suppressed at very low fields (0.6-2.7 Oe). We also find different field regimes for this stochastic motion that match well with the domain wall propagation modes. The highest pinning probability is found around the precessional mode and, interestingly, it does not depend on the external field in this regime. These results constitute experimental evidence of the intrinsic nature of the stochastic pinning of domain walls in soft magnetic nanostripes.