924 results for error model


Relevance:

30.00%

Publisher:

Abstract:

Scale mixtures of the skew-normal (SMSN) distribution form a class of asymmetric thick-tailed distributions that includes the skew-normal (SN) distribution as a special case. The main advantage of this class is that its members are easy to simulate and admit a convenient hierarchical representation that facilitates implementation of the expectation-maximization (EM) algorithm for maximum-likelihood estimation. In this paper, we assume an SMSN distribution for the unobserved values of the covariates and a symmetric scale mixture of the normal distribution for the error term of the model. This provides a robust alternative for parameter estimation in multivariate measurement error models. Specific distributions examined include univariate and multivariate versions of the SN, skew-t, skew-slash and skew-contaminated normal distributions. The results and methods are applied to a real data set.
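
The "easy to simulate" claim rests on the standard convolution representation of the skew-normal distribution. A minimal Python sketch (an illustration, not the authors' code):

```python
import numpy as np

def rskewnormal(n, loc=0.0, scale=1.0, shape=0.0, rng=None):
    """Draw n skew-normal variates via the stochastic representation
    Y = loc + scale*(delta*|Z0| + sqrt(1 - delta^2)*Z1), where Z0 and Z1
    are independent standard normals and delta = shape/sqrt(1 + shape^2)."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = shape / np.sqrt(1.0 + shape**2)
    z0 = np.abs(rng.standard_normal(n))   # half-normal latent component
    z1 = rng.standard_normal(n)
    return loc + scale * (delta * z0 + np.sqrt(1.0 - delta**2) * z1)

# Dividing scale by the square root of a positive mixing variable
# (e.g. Gamma(nu/2, nu/2)) turns this into a skew-t draw, illustrating
# how the SMSN hierarchy extends the skew-normal case.
```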

Relevance:

30.00%

Publisher:

Abstract:

In this article, we present the EM algorithm for maximum-likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study evaluates the performance of the calibration estimator in both interpolation and extrapolation settings. As an application to a real data set, we fitted the model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetric errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the choice between the usual calibration model and the model proposed in this article is evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
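
The three selection criteria named above have standard closed forms; the following sketch (illustrative, not the article's implementation) computes them from a maximized log-likelihood with k parameters and n observations:

```python
import math

def info_criteria(loglik, k, n):
    """Standard information criteria from a maximized log-likelihood,
    k free parameters and n observations (smaller is better)."""
    aic = 2 * k - 2 * loglik                          # Akaike
    bic = k * math.log(n) - 2 * loglik                # Schwarz / Bayesian
    hq = 2 * k * math.log(math.log(n)) - 2 * loglik   # Hannan-Quinn
    return {"AIC": aic, "BIC": bic, "HQ": hq}
```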

Relevance:

30.00%

Publisher:

Abstract:

We consider a Bayesian approach to the nonlinear regression model in which the normal distribution of the error term is replaced by skewed distributions that account for skewness alone or for both skewness and heavy tails. The data considered in this paper are repeated measurements taken over time on a set of individuals; such multiple observations on the same individual generally produce serially correlated outcomes, so our model additionally allows for correlation between observations from the same individual. We illustrate the procedure with a data set on the growth curves of a clinical measurement for a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using posterior simulation schemes based on Markov chain Monte Carlo (MCMC) methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated for the proposed model through a simulation study, which suggests that the DIC criterion is not trustworthy for this kind of complex model.
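
As a rough illustration of the comparison criteria, the sketch below computes DIC and CPO from a matrix of pointwise log-likelihoods evaluated at the MCMC draws; note the plug-in deviance is approximated by the best draw rather than at the posterior mean, a simplification of the usual definition:

```python
import numpy as np

def dic_and_cpo(loglik_draws):
    """loglik_draws: (S, n) array of log p(y_i | theta_s) for S posterior
    draws and n observations, assumed saved during the MCMC run."""
    dev = -2.0 * loglik_draws.sum(axis=1)   # deviance at each draw
    dbar = dev.mean()                       # posterior mean deviance
    d_hat = dev.min()                       # crude stand-in for D(theta_bar)
    p_d = dbar - d_hat                      # effective number of parameters
    dic = dbar + p_d
    # CPO_i: harmonic mean of the pointwise likelihoods across draws
    cpo = 1.0 / np.mean(np.exp(-loglik_draws), axis=0)
    lpml = np.log(cpo).sum()                # log pseudo-marginal likelihood
    return dic, cpo, lpml
```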

Relevance:

30.00%

Publisher:

Abstract:

In many epidemiological studies it is common to use regression models relating the incidence of a disease to its risk factors. The main goal of this paper is inference on such models when the observations are error-prone and the variances of the measurement errors change across observations. We suppose that the observations follow a bivariate normal distribution and that the measurement errors are normally distributed; aggregate data allow estimation of the error variances. Maximum-likelihood estimates are computed numerically via the EM algorithm, and consistent estimation of the asymptotic variance of the maximum-likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest, and we implement a simple graphical device for assessing the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease. Copyright (C) 2008 John Wiley & Sons, Ltd.
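
A minimal sketch of the kind of likelihood involved, assuming for illustration that only the covariate is error-prone, with known observation-specific error variances (the paper's setting, with aggregate data estimating the variances, is richer): each observed pair is marginally bivariate normal, so the likelihood can be maximized numerically rather than via EM.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, w, y, tau2):
    """Negative marginal log-likelihood of (W_i, Y_i) under
    Y_i = alpha + beta*x_i + e_i,  W_i = x_i + u_i,
    x_i ~ N(mu, sx2), e_i ~ N(0, se2), u_i ~ N(0, tau2_i), tau2_i known.
    A simplified structural sketch, not the authors' EM implementation."""
    alpha, beta, mu, log_sx2, log_se2 = theta
    sx2, se2 = np.exp(log_sx2), np.exp(log_se2)
    vw = sx2 + tau2                  # Var(W_i), changes across observations
    vy = beta**2 * sx2 + se2         # Var(Y_i)
    cwy = beta * sx2                 # Cov(W_i, Y_i)
    det = vw * vy - cwy**2
    dw, dy = w - mu, y - (alpha + beta * mu)
    quad = (vy * dw**2 - 2 * cwy * dw * dy + vw * dy**2) / det
    return 0.5 * np.sum(2 * np.log(2 * np.pi) + np.log(det) + quad)

# Example call (w, y, tau2 are data arrays of equal length):
# fit = minimize(neg_loglik, x0=[0.0, 1.0, 0.0, 0.0, 0.0], args=(w, y, tau2))
```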

Relevance:

30.00%

Publisher:

Abstract:

Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
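
For orientation, the textbook mixed-model BLUP of a realized cluster mean, one of the benchmarks against which the expanded predictor is compared, shrinks each sample cluster mean toward the overall mean, with more shrinkage for smaller clusters. A minimal Python sketch (illustrative, not the paper's expanded finite-population predictor):

```python
import numpy as np

def blup_cluster_means(ybar, n, sigma2_b, sigma2_e, grand_mean):
    """Mixed-model BLUP of realized cluster means: ybar are the sample
    cluster means, n the cluster sample sizes, sigma2_b and sigma2_e the
    between- and within-cluster variance components."""
    ybar, n = np.asarray(ybar, float), np.asarray(n, float)
    k = sigma2_b / (sigma2_b + sigma2_e / n)   # shrinkage factor in [0, 1)
    return grand_mean + k * (ybar - grand_mean)
```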

Relevance:

30.00%

Publisher:

Abstract:

Objective: In the presence of decarboxylase inhibitors, levodopa follows two-compartment kinetics, and its effect is typically modelled using sigmoid Emax models. Pharmacokinetic modelling of the absorption phase after oral administration is problematic because of irregular gastric emptying. The purpose of this work was to identify and estimate a population pharmacokinetic-pharmacodynamic model for duodenal infusion of levodopa/carbidopa (Duodopa®) that can be used for in numero simulation of treatment strategies. Methods: The modelling involved pooling data from two studies and fixing some parameters to values from the literature (Chan et al., J Pharmacokinet Pharmacodyn. 2005 Aug;32(3-4):307-31). The first study involved 12 patients on 3 occasions and is described in Nyholm et al., Clinical Neuropharmacology 2003;26:156-63. The second study, PEDAL, involved 3 patients on 2 occasions. A bolus dose (normal morning dose plus 50%) was given after an overnight washout. Plasma samples and motor ratings (clinical assessments of motor function from video recordings, on a treatment response scale from -3 to 3, where -3 represents severe parkinsonism and 3 represents severe dyskinesia) were collected repeatedly until the clinical effect returned to baseline. At this point, the usual infusion rate was restarted and sampling continued for another two hours. Different structural absorption models and effect models were evaluated using the value of the objective function in the NONMEM package. Population mean parameter values, standard errors of the estimates (SE) and, where possible, interindividual/interoccasion variability (IIV/IOV) were estimated. Results: Our results indicate that Duodopa absorption can be modelled with an absorption compartment with an added bioavailability fraction and a lag time. The most successful effect model was of sigmoid Emax type with a steep Hill coefficient and an effect-compartment delay. Estimated parameter values are presented in the table. Conclusions: The absorption and effect models were reasonably successful in fitting the observed data and can be used in simulation experiments.
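
A minimal sketch of the effect model described: an effect compartment integrated with explicit Euler, feeding a sigmoid Emax function on the -3 to 3 response scale. All names and values here are placeholders; the actual estimates come from the NONMEM fit.

```python
import numpy as np

def effect_compartment(t, c_plasma, ke0):
    """Effect-site concentration from plasma concentration via
    dCe/dt = ke0 * (Cp - Ce), integrated with explicit Euler."""
    ce = np.zeros_like(c_plasma)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ce[i] = ce[i - 1] + ke0 * (c_plasma[i - 1] - ce[i - 1]) * dt
    return ce

def sigmoid_emax(ce, e0, emax, ec50, gamma):
    """Sigmoid Emax effect model; a steep Hill coefficient gamma
    reproduces the abrupt on/off response the abstract reports."""
    return e0 + emax * ce**gamma / (ec50**gamma + ce**gamma)
```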

Relevance:

30.00%

Publisher:

Abstract:

This project constructs a structural model of the United States economy. The task is tackled in two separate ways: first with econometric methods and then with a neural network, both structured to mimic the structure of the U.S. economy. The structural model tracks the performance of U.S. GDP rather well in a dynamic simulation, with an average error of just over 1 percent. The neural network also performed well but suffered from some theoretical as well as implementation issues.

Relevance:

30.00%

Publisher:

Abstract:

The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when that is the researcher's preferred access arrangement. By decoupling the data model from data persistence, it is much easier to swap in, for instance, a relational database to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from the CF conventions has been designed to handle time series for SWIFT efficiently.
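
The decoupling pattern can be illustrated with a toy sketch; the class and field names below are invented, since the abstract does not show the real SWIFT schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    """Toy stand-in for a model-configuration record (fields invented)."""
    name: str
    area_km2: float
    routing: str

class JsonStore:
    """File-based JSON persistence, the research-friendly backend."""
    def save(self, cfg, path):
        with open(path, "w") as f:
            json.dump(asdict(cfg), f, indent=2)
    def load(self, path):
        with open(path) as f:
            return SubareaConfig(**json.load(f))

# An operational deployment could substitute a relational-database store
# exposing the same save/load interface, gaining audit trails and
# provenance without touching the in-memory data model.
```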

Relevance:

30.00%

Publisher:

Abstract:

It is well known that cointegration between the levels of two variables (e.g. prices and dividends) is a necessary condition for assessing the empirical validity of a present-value model (PVM) linking them. The work on cointegration, namely on long-run co-movements, has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model be orthogonal to the past. This amounts to investigating whether short-run co-movements stemming from common cyclical feature restrictions are also present in such a system. In this paper we test for the presence of such co-movements in long- and short-term interest rates and in prices and dividends for the U.S. economy. We focus on the potential improvement in forecasting accuracy when imposing these two types of restrictions derived from economic theory.
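
A hedged sketch of checking both necessary conditions, pairing an Engle-Granger cointegration test with an orthogonality regression of a crude proxy for the model's forecast error on its own lags (the paper's construction of the forecast error is more elaborate):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def pvm_checks(long_rate, short_rate, lags=4):
    """(i) Engle-Granger test of cointegration between the two levels;
    (ii) joint F-test that a forecast-error proxy (here the change in
    the spread, a simplification) is orthogonal to its own past."""
    t_stat, p_coint, _ = coint(long_rate, short_rate)
    spread = np.asarray(long_rate) - np.asarray(short_rate)
    err = np.diff(spread)
    # Stack lagged values of the error as regressors.
    X = np.column_stack(
        [err[lags - i - 1:len(err) - i - 1] for i in range(lags)])
    y = err[lags:]
    ols = sm.OLS(y, sm.add_constant(X)).fit()
    return p_coint, ols.f_pvalue   # small p-values reject the PVM conditions
```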

Relevance:

30.00%

Publisher:

Abstract:

This paper makes two original contributions. First, we show that the present-value model (PVM hereafter), which has wide application in macroeconomics and finance, entails common cyclical feature restrictions in the dynamics of its vector error-correction representation (Vahid and Engle, 1993); this has already been investigated in the VECM context by Johansen and Swensen (1999, 2011) but not previously discussed with this emphasis. We also provide the present-value reduced-rank constraints to be tested within the log-linear model. Our second contribution relates to forecasting time series that are subject to these long- and short-run reduced-rank restrictions. The reason appropriate common cyclical feature restrictions might improve forecasting is that they impose natural exclusion restrictions, preventing the estimation of useless parameters that would otherwise inflate the forecast variance with no expected reduction in bias. We applied the techniques discussed in this paper to data known to be subject to present-value restrictions, namely the online series maintained and updated by Shiller. We focus on three data sets: the first includes the levels of interest rates with long and short maturities, the second the level of real prices and dividends for the S&P composite index, and the third the logarithmic transformation of prices and dividends. Our exhaustive investigation of several different multivariate models reveals that better forecasts can be achieved when the restrictions are imposed. Moreover, imposing the short-run restrictions produces forecast winners 70% of the time for the target variables of PVMs and 63.33% of the time when all variables in the system are considered.
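
The long-run part of these restrictions can be imposed off the shelf by fitting a rank-one VECM; the short-run common-cyclical-feature restrictions have no ready-made implementation in statsmodels and would need custom code. A minimal sketch:

```python
from statsmodels.tsa.vector_ar.vecm import VECM

def fit_and_forecast(y, steps=12):
    """y: (T, 2) array, e.g. log prices and log dividends from Shiller's
    online data. coint_rank=1 imposes the long-run PVM restriction."""
    model = VECM(y, k_ar_diff=2, coint_rank=1, deterministic="co")
    res = model.fit()
    return res.predict(steps=steps)   # out-of-sample level forecasts
```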

Relevance:

30.00%

Publisher:

Abstract:

This study aims to contribute to the forecasting literature on stock returns in emerging markets. We use Autometrics to select relevant predictors among macroeconomic, microeconomic and technical variables, and develop predictive models for the Brazilian market premium (measured as the excess return over the Selic interest rate) and for the Itaú SA, Itaú-Unibanco and Bradesco stock returns. We find that, for the market premium, an ADL with error correction is able to outperform the benchmarks in terms of economic performance. For individual stock returns, there is a trade-off between the statistical properties and the out-of-sample performance of the model.
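
Autometrics itself ships with OxMetrics, but its general-to-specific spirit can be sketched as a simplified backward-elimination loop (the real algorithm searches multiple reduction paths and applies batteries of diagnostic tests):

```python
import numpy as np
import statsmodels.api as sm

def gets_select(y, X, alpha=0.05):
    """Simplified general-to-specific selection: start from the general
    model with all candidate predictors (X: pandas DataFrame) and
    iteratively drop the least significant regressor until all
    survivors are significant at level alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit, cols           # all remaining predictors significant
        cols.remove(worst)
    return sm.OLS(y, np.ones((len(y), 1))).fit(), []   # intercept-only model
```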

Relevance:

30.00%

Publisher:

Abstract:

Introduction. Leaf area is often related to plant growth, development, physiology and yield. Many non-destructive models have been proposed for leaf area estimation in several plant genotypes, demonstrating that leaf length, leaf width and leaf area are closely correlated. Thus, the objective of our study was to develop a reliable model for estimating the leaf area of citrus genotypes from linear measurements of leaf dimensions. Materials and methods. Leaves of citrus genotypes were harvested, and their dimensions (length, width and area) were measured. Values of leaf area were regressed against length, width, the square of length, the square of width and the product (length x width). The most accurate equations, either linear or second-order polynomial, were then refitted to a new data set, and the most reliable equation was identified. Results and discussion. The first analysis showed that the variables length, width and the square of length gave better results in second-order polynomial equations, while the linear equations were more suitable and accurate when the width and the product (length x width) were used. When these equations were refitted to the new data set, the coefficient of determination (R²) and the agreement index 'd' were higher for the equation using the product (length x width), while its mean absolute percentage error (MAPE) was lower. Conclusion. The product of the simple leaf dimensions (length x width) can provide a reliable and simple non-destructive model for leaf area estimation across citrus genotypes.
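
A minimal sketch of the recommended model, fitting leaf area against the product (length x width) on one data set and validating R² and MAPE on another, mirroring the paper's two-step procedure (illustrative code, not the authors'):

```python
import numpy as np

def fit_leaf_area(length, width, area):
    """Fit the linear model  area = b0 + b1 * (length * width)."""
    lw = np.asarray(length) * np.asarray(width)
    b1, b0 = np.polyfit(lw, area, 1)   # slope first, then intercept
    return b0, b1

def validate(b0, b1, length, width, area):
    """R² and mean absolute percentage error on an independent data set."""
    area = np.asarray(area, float)
    pred = b0 + b1 * np.asarray(length) * np.asarray(width)
    resid = area - pred
    r2 = 1.0 - resid.var() / area.var()
    mape = 100.0 * np.mean(np.abs(resid) / area)
    return r2, mape
```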

Relevance:

30.00%

Publisher:

Abstract:

Ionospheric scintillations are caused by time-varying electron density irregularities in the ionosphere and occur more often at equatorial and high latitudes. This paper focuses exclusively on experiments undertaken in Europe, at geographic latitudes between approximately 50°N and 80°N, where a network of GPS receivers capable of monitoring Total Electron Content and ionospheric scintillation parameters was deployed. The widely used ionospheric scintillation indices S4 and σφ are a practical measure of the intensity of the amplitude and phase scintillation affecting GNSS receivers, but they do not provide sufficient information about the actual tracking errors that degrade receiver performance. Suitable receiver tracking models, sensitive to ionospheric scintillation, allow computation of the variance of the output error of the receiver PLL (Phase Locked Loop) and DLL (Delay Locked Loop), which expresses the quality of the range measurements the receiver uses to calculate the user position. The ability of such models to incorporate phase and amplitude scintillation effects into the variance of these tracking errors underpins our proposed method of applying relative weights to measurements from different satellites. This gives the least-squares stochastic model used for position computation a more realistic representation than the conventional 'equal weights' model. For pseudorange processing, relative weights were computed so that a 'scintillation-mitigated' solution could be performed and compared to the (non-mitigated) 'equal weights' solution. An improvement of between 17% and 38% in height accuracy was achieved when an epoch-by-epoch differential solution was computed over baselines ranging from 1 to 750 km. The method was then compared with alternative approaches to improving the least-squares stochastic model, such as weighting according to satellite elevation angle or by the inverse of the square of the standard deviation of the code/carrier divergence (σCCDiv). The influence of multipath effects on the proposed mitigation approach is also discussed. Using high-rate scintillation data in addition to the scintillation indices, a carrier-phase-based mitigated solution was also implemented and compared with the conventional solution. During a period of high phase scintillation, it was observed that problems related to ambiguity resolution can be reduced by using the proposed mitigated solution.
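
The weighting idea reduces to a weighted least-squares position solution in which each satellite's observation is weighted by the inverse of its modelled tracking-error variance; the variance model itself, driven by S4 and σφ, is not reproduced here. A minimal sketch of one solution step:

```python
import numpy as np

def wls_position(G, residuals, track_var):
    """One Gauss-Newton step of a weighted least-squares position fix.
    G: (m, 4) geometry matrix (unit line-of-sight vectors plus clock column),
    residuals: (m,) observed-minus-predicted pseudoranges,
    track_var: (m,) per-satellite tracking-error variances from the
    scintillation-sensitive PLL/DLL model (the 'mitigated' weights;
    equal variances reproduce the conventional solution)."""
    W = np.diag(1.0 / np.asarray(track_var, float))
    N = G.T @ W @ G                     # weighted normal matrix
    dx = np.linalg.solve(N, G.T @ W @ residuals)
    cov = np.linalg.inv(N)              # formal covariance of the estimate
    return dx, cov
```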

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)