980 results for MAXIMUM PENALIZED LIKELIHOOD ESTIMATES


Relevance:

30.00%

Publisher:

Abstract:

We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1-39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343-365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33-53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655-661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
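For reference, the Cox-Reid modified profile likelihood cited above has the generic form (notation ours: psi is the interest parameter, here the Weibull shape, and lambda the orthogonal nuisance parameters):

    \ell_{CR}(\psi) = \ell_p(\psi) - \tfrac{1}{2}\log\left|j_{\lambda\lambda}(\psi,\hat{\lambda}_\psi)\right|, \qquad \ell_p(\psi) = \ell(\psi,\hat{\lambda}_\psi),

where \ell is the log-likelihood, \hat{\lambda}_\psi is the constrained maximum likelihood estimate of \lambda for fixed \psi, and j_{\lambda\lambda} is the observed information block for the nuisance parameters. Barndorff-Nielsen's adjustment adds a further term involving sample-space derivatives, which the Fraser-Reid-Wu results allow one to approximate.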

Relevance:

30.00%

Publisher:

Abstract:

In this article, we discuss inferential aspects of measurement error regression models with null intercepts when the unknown quantity x (latent variable) follows a skew-normal distribution. We first examine the maximum likelihood approach to estimation via the EM algorithm, exploring statistical properties of the model considered. Then the marginal likelihood, the score function and the observed information matrix of the observed quantities are presented, allowing inference to be implemented directly. In order to discuss diagnostic techniques for this type of model, we derive the appropriate matrices for assessing the local influence on the parameter estimates under different perturbation schemes. The results and methods developed in this paper are illustrated with part of a real data set used by Hadgu and Koch [1999, Application of generalized estimating equations to a dental randomized clinical trial. Journal of Biopharmaceutical Statistics, 9, 161-178].
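A schematic form of a null-intercept measurement error model of this kind (our notation, not taken from the paper) is

    y_{ij} = \beta_j x_i + e_{ij}, \qquad X_i = x_i + u_i, \qquad x_i \sim \mathrm{SN}(\mu_x, \sigma_x^2, \lambda),

where the intercepts are constrained to zero, X_i is the observed surrogate for the latent x_i, e_{ij} and u_i are normal measurement errors, and \lambda is the skewness parameter of the skew-normal latent distribution; the EM algorithm treats the unobserved x_i as missing data.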

Relevance:

30.00%

Publisher:

Abstract:

The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented. (C) 2010 Elsevier B.V. All rights reserved.
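The unadjusted statistic referred to above is the standard signed root of the likelihood ratio; for a scalar interest parameter \psi with nuisance parameters \lambda,

    r(\psi) = \operatorname{sign}(\hat{\psi} - \psi)\left[2\{\ell(\hat{\psi},\hat{\lambda}) - \ell(\psi,\hat{\lambda}_\psi)\}\right]^{1/2},

which is treated as approximately standard normal; the adjusted versions modify r so that this normal approximation remains accurate to higher order in small samples.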

Relevance:

30.00%

Publisher:

Abstract:

When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common, though naive, practice is to base the comparison of sensitivity, specificity, as well as of positive and negative predictive values on some subset of the data that fits into methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing completely at random (MCAR) assumption and may generate biased and less precise estimates. We review some models that use the dependence structure of the completely observed cases to incorporate the information of the partially categorized observations into the analysis and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.
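As a point of reference, the "naive" complete-case comparison that the abstract warns against simply computes the accuracy measures from the subjects whose test results and gold-standard classification are fully observed. A minimal sketch, with hypothetical counts (not from the study):

```python
import numpy as np

# Hypothetical 2x2 table of the completely observed cases:
# rows = gold standard (disease present, disease absent)
# cols = diagnostic test result (positive, negative)
table = np.array([[40, 10],   # diseased:     TP = 40, FN = 10
                  [ 5, 45]])  # non-diseased: FP = 5,  TN = 45

tp, fn = table[0]
fp, tn = table[1]

sensitivity = tp / (tp + fn)   # P(test positive | disease)
specificity = tn / (tn + fp)   # P(test negative | no disease)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"Se={sensitivity:.3f}  Sp={specificity:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

Estimates of this kind are valid only under MCAR; the models reviewed in the paper instead keep the partially classified counts in the likelihood, which is what the two-stage maximum likelihood / weighted least squares fit operates on.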

Relevance:

30.00%

Publisher:

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are considered directly in item response models (IRM) and are usually called latent traits. A common assumption for parameter estimation in IRM, when a single group of examinees is considered, is that the latent traits are random variables following a standard normal distribution. However, many studies suggest that this assumption does not hold in a number of cases, and when it fails the parameter estimates tend to be biased and misleading inferences may be drawn. It is therefore important to model the distribution of the latent traits properly. In this paper we present an alternative model for the latent traits based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization proposed by Azzalini (1985), which ensures model identifiability, as pointed out by Azevedo et al. (2009b). A Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed in order to assess parameter recovery under the proposed model and estimation method, as well as the effect of the asymmetry level of the latent trait distribution on the parameter estimates. We also compared our approach with estimation methods that assume a symmetric normal distribution for the latent traits. The results indicate that the proposed algorithm recovers all parameters properly; specifically, the greater the asymmetry level, the better the performance of our approach compared with the alternatives, mainly for small sample sizes (numbers of examinees). Furthermore, we analyzed a real data set which shows indications of asymmetry in the latent trait distribution. The results obtained with our approach confirm the presence of strong negative asymmetry of the latent trait distribution. (C) 2010 Elsevier B.V. All rights reserved.
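For context, the skew-normal family used for the latent traits has, in its direct parameterization, density

    f(x \mid \mu, \sigma, \lambda) = \frac{2}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right)\Phi\!\left(\lambda\,\frac{x-\mu}{\sigma}\right),

where \phi and \Phi are the standard normal density and distribution function and \lambda controls the asymmetry (\lambda = 0 recovers the normal distribution). The centred parameterization re-expresses (\mu, \sigma, \lambda) through the mean, standard deviation and Pearson skewness coefficient of X, which is the device that yields identifiability in the IRT setting.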

Relevance:

30.00%

Publisher:

Abstract:

We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random and missing completely at random models are more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that a MNAR model is misspecified because the estimate is on the boundary of the parameter space.
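For concreteness, writing Y for the complete data and R for the missingness indicators, the mechanisms discussed above are usually formalized as

    \text{MCAR:}\quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = P(R)
    \text{MAR:}\quad  P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = P(R \mid Y_{\mathrm{obs}})
    \text{MNAR:}\quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) \text{ depends on } Y_{\mathrm{mis}},

and under MNAR the missingness model cannot be ignored, which is where the boundary and identifiability problems described above arise for saturated contingency table models.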

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we discuss inferential aspects of the Grubbs model when the unknown quantity x (latent response) follows a skew-normal distribution, extending earlier results given in Arellano-Valle et al. (J Multivar Anal 96:265-281, 2005b). Maximum likelihood parameter estimates are computed via the EM algorithm. Wald and likelihood ratio type statistics are used for hypothesis testing, and we explain the apparent failure of the Wald statistic in detecting skewness via the profile likelihood function. The results and methods developed in this paper are illustrated with a numerical example.
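The Grubbs model in question, in a common formulation for p measuring devices (notation ours), can be written as

    y_{ij} = \alpha_i + x_j + \varepsilon_{ij}, \qquad i = 1,\dots,p, \quad j = 1,\dots,n,

where x_j is the latent (true) response of unit j, here assumed skew-normal, \alpha_i is the additive bias of device i (often with \alpha_1 = 0 fixed for identifiability), and the \varepsilon_{ij} are independent measurement errors.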

Relevance:

30.00%

Publisher:

Abstract:

We consider the issue of performing residual and local influence analyses in beta regression models with varying dispersion, which are useful for modelling random variables that assume values in the standard unit interval. In such models, both the mean and the dispersion depend upon independent variables. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. An application using real data is presented and discussed.
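In this class of models both the mean and a precision/dispersion parameter are typically linked to covariates, as in Ferrari and Cribari-Neto's mean-precision parameterization of beta regression; schematically,

    y_t \sim \mathcal{B}(\mu_t, \phi_t), \qquad g(\mu_t) = x_t^{\top}\beta, \qquad h(\phi_t) = z_t^{\top}\gamma,

where \mathcal{B}(\mu, \phi) is the beta distribution indexed by its mean \mu \in (0,1) and precision \phi > 0, and g and h are link functions (for example, logit and log). Local influence is then assessed through the curvature of the likelihood displacement for (\beta, \gamma) under the perturbation schemes considered.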

Relevance:

30.00%

Publisher:

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
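For example, for testing whether a single variance component is zero in a model with one random effect, the asymptotic null distribution justified in this way is the mixture

    \mathrm{LR} \;\xrightarrow{d}\; \tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1,

where \chi^2_0 denotes a point mass at zero; for testing one variance component when another random effect remains in the model, the corresponding mixture is \tfrac{1}{2}\chi^2_1 + \tfrac{1}{2}\chi^2_2, as tabulated by Stram and Lee (1994).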

Relevance:

30.00%

Publisher:

Abstract:

The Grubbs measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions have been an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation, and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study to compare the precision of several thermocouples. (C) 2008 Elsevier B.V. All rights reserved.
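The scale mixtures of normal distributions mentioned above admit the hierarchical representation

    \varepsilon \mid U = u \;\sim\; N\!\left(0, \kappa(u)\,\sigma^2\right), \qquad U \sim H,

which includes the Student-t (U gamma distributed and \kappa(u) = 1/u), slash and contaminated normal distributions as special cases; treating the mixing variables U as missing data is what makes the E-step of the EM-type algorithm tractable.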

Relevance:

30.00%

Publisher:

Abstract:

The aim of this article is to discuss the estimation of the systematic risk in capital asset pricing models with heavy-tailed error distributions used to explain the asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It may be shown that outlying observations are downweighted in the maximum likelihood equations of linear models with heavy-tailed error distributions, such as the Student-t, power exponential, logistic II, and so on. This robustness aspect may also be extended to influential observations. An application in which the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors is presented for illustration.
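The down-weighting can be seen explicitly in the Student-t case: the maximum likelihood estimating equations for the regression coefficients are weighted versions of the normal-theory ones, with weights of the form (our notation)

    w_i = \frac{\nu + 1}{\nu + \delta_i^2}, \qquad \delta_i^2 = \frac{(y_i - x_i^{\top}\beta)^2}{\sigma^2},

so observations with large standardized residuals receive small weights, which is the source of the robustness of the systematic risk (beta) estimate under heavy-tailed errors.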

Relevance:

30.00%

Publisher:

Abstract:

We give estimates of the intrinsic and the extrinsic curvature of manifolds that are isometrically immersed as cylindrically bounded submanifolds of warped products. We also address extensions of the results in the case of submanifolds of the total space of a Riemannian submersion.

Relevance:

30.00%

Publisher:

Abstract:

We consider methods for estimating causal effects of treatment in the situation where the individuals in the treatment and the control group are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects. However, this method is based on some strong assumptions which are not directly testable. In this paper, we present an alternative modeling approach to drawing causal inference, based on a shared random-effects model, together with a computational algorithm for likelihood-based inference under such a model. With small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive to the model misspecifications we consider than the existing methods.

Relevance:

30.00%

Publisher:

Abstract:

This study examined the characteristics of stock portfolios optimized under the mean-variance criterion and constructed from robust estimates of risk and return. The motivation is the typical distribution of financial assets, which exhibits outliers and heavier kurtosis than the normal distribution. To compare the portfolios, the following properties were considered: stability, variability, and the Sharpe ratios they attained. The overall result shows that portfolios obtained from robust estimates of risk and return improve in stability and variability; however, this improvement is not enough to distinguish their Sharpe ratios from those of portfolios obtained with maximum likelihood estimates of risk and return.
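A minimal sketch of this kind of comparison, assuming simulated heavy-tailed returns and using the Minimum Covariance Determinant as one possible robust estimator (not necessarily the one adopted in the study):

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
# Hypothetical heavy-tailed daily returns for 5 assets (Student-t, 4 df)
returns = rng.standard_t(df=4, size=(500, 5)) * 0.01

# Classical (maximum likelihood) estimates of the mean vector and covariance matrix
mu_ml = returns.mean(axis=0)
cov_ml = np.cov(returns, rowvar=False)

# Robust estimates via the Minimum Covariance Determinant
mcd = MinCovDet(random_state=0).fit(returns)
mu_rob, cov_rob = mcd.location_, mcd.covariance_

def min_variance_weights(cov):
    """Unconstrained minimum-variance portfolio: w proportional to inv(cov) @ 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

w_ml = min_variance_weights(cov_ml)
w_rob = min_variance_weights(cov_rob)

# Compare the two portfolios under the sample moments
# (a crude Sharpe-type ratio, risk-free rate omitted)
def sharpe(w):
    return (w @ mu_ml) / np.sqrt(w @ cov_ml @ w)

print("ML-based weights:    ", np.round(w_ml, 3), " Sharpe:", round(sharpe(w_ml), 3))
print("Robust-based weights:", np.round(w_rob, 3), " Sharpe:", round(sharpe(w_rob), 3))
```

In practice the study's comparison would be made on rolling, out-of-sample windows of real return data rather than on a single simulated sample as above.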

Relevance:

30.00%

Publisher:

Abstract:

Local provision of public services has the positive effect of increasing efficiency, because each locality has idiosyncrasies that determine a particular demand for public services. This dissertation addresses different aspects of the local demand for public goods and services and their relationship with political incentives. The text is divided into three essays.

The first essay tests for the existence of yardstick competition in education spending using panel data from Brazilian municipalities. It estimates two-regime spatial Durbin models with time and spatial fixed effects by maximum likelihood (a schematic form of this specification is sketched after this abstract), where the regimes represent different electoral and educational accountability institutional settings. First, it investigates whether lame-duck incumbents tend to engage in less strategic interaction as a result of the impossibility of reelection, which lowers their incentive to signal their type (good or bad) to voters by mimicking their neighbors' expenditures. It then evaluates whether the lack of electoral support faced by minority governments leads incumbents to mimic their neighbors' spending to a greater extent in order to increase their odds of reelection. Next, the essay estimates the effects of the institutional change introduced by the disclosure, in April 2007, of the Basic Education Development Index (known as IDEB) and its goals on strategic interaction at the municipality level. This institutional change potentially increased the incentives for incumbents to follow national best practices in an attempt to signal their type to voters, thus reducing the importance of local information spillover. The same model is also tested using school inputs believed to improve students' performance in place of education spending. The results show evidence of yardstick competition in education spending: spatial autocorrelation is lower among lame ducks and higher among incumbents with minority support (a smaller vote margin). In addition, the institutional change introduced by the IDEB reduced the spatial interaction in education spending and input-setting, thus diminishing the importance of local information spillover.

The second essay investigates the role played by the geographic distance between the poor and the non-poor in the local demand for income redistribution. In particular, it provides an empirical test of the geographically limited altruism model proposed in Pauly (1973), incorporating the possibility of participation costs associated with the provision of transfers (Van de Walle, 1998). The discussion is first motivated by allowing for an "iceberg cost" of participation in the programs for poor individuals in Pauly's original model. Then, using data from the 2000 Brazilian Census and a panel of municipalities based on the National Household Sample Survey (PNAD) from 2001 to 2007, all the distance-related explanatory variables indicate that increased proximity between poor and non-poor is associated with better targeting of the programs (demand for redistribution). For instance, a 1-hour increase in the time the poor spend commuting reduces targeting by 3.158 percentage points. This result is similar to that of Ashworth, Heyndels and Smolders (2002) but is definitely not due to program leakages. To empirically disentangle participation costs from spatially restricted altruism effects, an additional test is conducted using unique panel data based on the 2004 and 2006 PNAD, which record the number of benefits and the average benefit value received by beneficiaries. The estimates suggest that both cost and altruism play important roles in determining targeting in Brazil and, thus, the demand for redistribution. Lastly, the results indicate that "size matters"; i.e., the budget for redistribution has a positive impact on targeting.

The third essay empirically tests the validity of the median voter model for the Brazilian case. Information on municipalities is obtained from the Population Census and the Brazilian Supreme Electoral Court for the year 2000. First, the median voter's demand for local public services is estimated. The bundles of services offered by reelection candidates are identified with the expenditures realized during the incumbents' first term in office. The assumption that candidates have perfect information about the median demand is relaxed, and a weaker hypothesis of rational expectations is imposed; incumbents therefore make mistakes about the median demand, referred to as misperception errors. At a given point in time, an incumbent may thus provide a bundle (given by the amount of expenditure per capita) that differs from the median voter's demand for public services by a multiplicative error term, which is included in the residuals of the demand equation. Next, the impact of the absolute value of this misperception error on the electoral performance of incumbents is estimated using a selection model. The results suggest that the median voter model is valid for the case of Brazilian municipalities.
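The spatial Durbin specification referenced in the first essay has, in its standard single-regime form (our notation, abstracting from the two-regime structure and the fixed effects),

    y = \rho W y + X\beta + W X\theta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n),

where y collects municipal education spending, W is the spatial weights matrix, \rho measures the strategic interaction (yardstick competition) with neighbors' spending, and WX\theta lets neighbors' characteristics affect local spending; the two-regime version allows \rho to differ across the electoral and accountability regimes described above.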