947 results for Maximum Likelihood Estimation


Abstract:

This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, either through a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation. This approach leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research. The advantages take the form of efficiency gains, bias reduction, and greater flexibility in model specification. A bias arises when bounding information is present but ignored; it is mitigated within this framework. An efficiency gain arises because the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model. This approach is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates that have, for several years, been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
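
As a point of reference, the minimal sketch below (illustrative only, not the paper's bounded framework) fits an unbounded Ornstein-Uhlenbeck interest-rate diffusion by exact maximum likelihood using its known Gaussian transition density; the framework described above would replace that density with one that conditions on the process respecting the bound. All names and parameter values are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def ou_neg_loglik(params, x, dt):
        # exact Gaussian transition density of dX = kappa*(theta - X) dt + sigma dW
        kappa, theta, sigma = params
        if kappa <= 0 or sigma <= 0:
            return np.inf
        mean = theta + (x[:-1] - theta) * np.exp(-kappa * dt)
        var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
        resid = x[1:] - mean
        return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

    # simulate a sample path, then recover the parameters by MLE
    rng = np.random.default_rng(0)
    dt, n = 1 / 12, 600
    kappa, theta, sigma = 0.8, 0.03, 0.02
    x = np.empty(n)
    x[0] = theta
    for t in range(1, n):
        m = theta + (x[t - 1] - theta) * np.exp(-kappa * dt)
        v = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
        x[t] = m + np.sqrt(v) * rng.standard_normal()

    fit = minimize(ou_neg_loglik, x0=[0.5, 0.0, 0.01], args=(x, dt), method="Nelder-Mead")
    print(fit.x)  # should be close to (0.8, 0.03, 0.02)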

Abstract:

A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
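
A minimal sketch of the classical maximum-likelihood side, assuming an additive scrambling scheme z = x'beta + error + s with s drawn from a known N(0, tau^2) distribution; Eichhorn & Hayre's multiplicative scheme leads to a different likelihood, and the Bayesian MCMC estimator is not shown. All values are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, beta_true, sigma_true, tau = 500, np.array([1.0, -2.0]), 0.5, 1.5
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = X @ beta_true + sigma_true * rng.standard_normal(n)  # true, unobserved responses
    z = y + tau * rng.standard_normal(n)                     # observed scrambled responses

    def neg_loglik(params):
        beta, log_sigma = params[:2], params[2]
        total_var = np.exp(2 * log_sigma) + tau**2           # model noise plus known scrambling noise
        resid = z - X @ beta
        return 0.5 * np.sum(np.log(2 * np.pi * total_var) + resid**2 / total_var)

    fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    print(fit.x[:2], np.exp(fit.x[2]))  # estimates of beta and sigma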

Abstract:

This work proposes the use of evolutionary computation to solve jointly the maximum-likelihood multiuser channel estimation (MuChE) and multiuser detection (MuD) problems in direct-sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods from the literature. Simulation results for a genetic algorithm (GA) applied to MuChE and MuD over multipath DS/CDMA channels show that the proposed genetic-algorithm multiuser channel estimator (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both the maximum-likelihood multiuser channel estimator (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multiuser detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multiuser detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was analyzed jointly in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE-MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
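
A stylized stand-in for the idea (not the paper's GAMuChE): under Gaussian noise, the ML channel estimate minimizes a squared residual, and a simple real-coded genetic algorithm can search for it. The signal model, population size and GA operators below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # stylized setup: received training samples y = S @ h_true + noise, with S known
    taps, pop_size = 4, 64
    S = rng.standard_normal((200, taps))
    h_true = rng.standard_normal(taps)
    y = S @ h_true + 0.1 * rng.standard_normal(200)

    def fitness(h):
        r = y - S @ h
        return -r @ r                      # higher is better (negative ML cost)

    pop = rng.standard_normal((pop_size, taps))
    for _ in range(300):
        fit = np.array([fitness(h) for h in pop])
        # binary tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # arithmetic crossover of each parent with its mirror, then Gaussian mutation
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += 0.05 * rng.standard_normal(children.shape)
        children[0] = pop[np.argmax(fit)]  # elitism: carry over the best individual
        pop = children

    best = max(pop, key=fitness)
    print(np.round(best, 3), np.round(h_true, 3))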

Abstract:

In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, with the times grouped into k intervals so that ties are eliminated. Data modeling is thus performed using discrete lifetime regression models. The model parameters are estimated by the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, we use diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence. In addition, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
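
A minimal sketch of a grouped-survival likelihood under one candidate link (the complementary log-log); the paper's four links, jackknife estimation and influence diagnostics are not reproduced, and all data and values below are synthetic assumptions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n, k = 400, 5                                  # subjects, grouped intervals
    x = rng.binomial(1, 0.5, n).astype(float)      # a single binary covariate
    gamma_true, beta_true = np.linspace(-2.5, -1.0, k), 0.7

    # simulate grouped lifetimes from the cloglog (grouped proportional hazards) model
    haz = 1 - np.exp(-np.exp(gamma_true[None, :] + beta_true * x[:, None]))
    event_matrix = rng.random((n, k)) < haz
    interval = np.where(event_matrix.any(axis=1), event_matrix.argmax(axis=1), k - 1)
    event = event_matrix.any(axis=1).astype(int)   # 0 = survived all k intervals

    def neg_loglik(params):
        gamma, beta = params[:k], params[k]
        h = 1 - np.exp(-np.exp(gamma[None, :] + beta * x[:, None]))  # cloglog link
        ll = 0.0
        for i in range(n):
            j = interval[i]
            ll += np.sum(np.log1p(-h[i, :j]))      # survived the intervals before j
            ll += np.log(h[i, j]) if event[i] else np.log1p(-h[i, j])
        return -ll

    fit = minimize(neg_loglik, x0=np.zeros(k + 1), method="BFGS")
    print(fit.x)  # estimates of gamma_1..gamma_k and beta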

Abstract:

We set up a dynamic model of firm investment in which liquidity constraints enter explicitly into the firm's maximization problem. The optimal policy rules are incorporated into a maximum likelihood procedure that estimates the structural parameters of the model. Investment is positively related to the firm's internal financial position when the firm is relatively poor. This relationship disappears for wealthy firms, which can reach their desired level of investment. Borrowing is an increasing function of financial position for poor firms. This relationship is reversed as a firm's financial position improves, and large firms hold little debt. Liquidity-constrained firms may have unused credit lines and the capacity to invest further if they desire. However, the fear that liquidity constraints will become binding in the future induces them to invest only when internal resources increase. We estimate the structural parameters of the model and use them to quantify the importance of liquidity constraints for firms' investment. We find that liquidity constraints matter significantly for the investment decisions of firms. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, the average capital stock is almost 35% higher over a period of 20 years. Transitory shocks to internal funds have a sustained effect on the capital stock. This effect lasts for several periods and is more persistent for small firms than for large firms. A 10% negative shock to firm fundamentals reduces the capital stock of firms that face liquidity constraints by almost 8% over a period, as opposed to only 3.5% for firms that do not face these constraints.

Abstract:

Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
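
A minimal sketch of a simulated MIL estimator on a toy model: the likelihood of the statistic Z_n is approximated by a Gaussian fitted to simulated draws of Z_n at each candidate parameter, then maximized. The toy model (i.i.d. normal data), the choice of Z_n and all values are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(3)
    n, R = 200, 500
    x_obs = rng.normal(1.0, 2.0, n)                # "observed" data, true (mu, sigma) = (1, 2)

    def statistic(sample):
        # the auxiliary statistic Z_n: sample mean and log sample standard deviation
        return np.array([sample.mean(), np.log(sample.std(ddof=1))])

    z_obs = statistic(x_obs)
    base = rng.standard_normal((R, n))             # common random numbers keep the objective smooth

    def neg_indirect_loglik(params):
        mu, log_sigma = params
        sims = mu + np.exp(log_sigma) * base       # R simulated datasets at the candidate parameters
        z = np.array([statistic(s) for s in sims])
        mean, cov = z.mean(axis=0), np.cov(z, rowvar=False)
        # Gaussian approximation to the sampling distribution (likelihood) of Z_n
        return -multivariate_normal.logpdf(z_obs, mean, cov)

    fit = minimize(neg_indirect_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    print(fit.x[0], np.exp(fit.x[1]))              # estimates of mu and sigma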

Abstract:

A simple model is proposed that uses the method of maximum likelihood to estimate malformation frequencies in racial groups based on data obtained from hospital services. The model uses the proportions of racial admixture and the observed malformation frequency. It was applied to two defects, postaxial polydactyly and cleft lip, whose frequencies are known to be heterogeneous among racial groups. The frequencies estimated in each racial group were those expected for these malformations, demonstrating the applicability of the method.
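
A minimal two-group sketch of the idea, assuming each hospital's observed frequency is a known admixture-weighted mixture of two group-specific frequencies; the binomial likelihood is then maximized over the two group frequencies. The hospital counts and proportions are synthetic assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(4)
    H = 30                                        # hospitals with known admixture proportions
    pi = rng.uniform(0.1, 0.9, H)                 # fraction of births from group A at each hospital
    pA_true, pB_true = 0.002, 0.008               # group-specific malformation frequencies
    births = rng.integers(2000, 8000, H)
    cases = rng.binomial(births, pi * pA_true + (1 - pi) * pB_true)

    def neg_loglik(logit_p):
        pA, pB = expit(logit_p)
        q = pi * pA + (1 - pi) * pB               # expected frequency given admixture
        return -np.sum(cases * np.log(q) + (births - cases) * np.log(1 - q))

    fit = minimize(neg_loglik, x0=[-5.0, -5.0], method="Nelder-Mead")
    print(expit(fit.x))                           # estimated frequencies in the two groups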

Abstract:

The attached file was created with Scientific WorkPlace (LaTeX).

Abstract:

This paper proposes different estimators for the parameters of semi-Pareto and Pareto autoregressive minification processes. The asymptotic properties of the estimators are established by showing that the semi-Pareto process is α-mixing. Asymptotic variances of different moment and maximum likelihood estimators are compared.

Abstract:

Urbanization refers to the process in which an increasing proportion of a population lives in cities and suburbs. Urbanization fuels the alteration of the land use/land cover pattern of a region, including an increase in built-up area, which leads to imperviousness of the ground surface. With increasing urbanization and population pressures, the impervious areas in cities are growing fast. An impervious surface is an anthropogenically modified surface that prevents water from infiltrating into the soil. Surface imperviousness mapping is important for studies related to water cycling, water quality, soil erosion, flood water drainage, non-point source pollution, the urban heat island effect and urban hydrology. The present study estimates the Total Impervious Area (TIA) of the city of Kochi using a high-resolution satellite image (LISS IV, 5 m resolution). Additionally, the study maps the Effective Impervious Area (EIA) by coupling the capabilities of GIS and remote sensing. A land use/land cover map of the study area was prepared from the LISS IV image acquired for the year 2012. The classes were merged to prepare a map showing pervious and impervious areas. Supervised Maximum Likelihood Classification (supervised MLC), a simple but accurate method for image classification, was used to calculate TIA, and an overall classification accuracy of 86.33% was obtained. Water bodies are 100% pervious, whereas urban built-up areas are 100% impervious. Further, based on the percentage of imperviousness, the Total Impervious Area is categorized into various classes.
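
The core of supervised MLC is fitting one Gaussian per class to training pixels and assigning each pixel to the class with the highest likelihood. The stylized two-class sketch below uses synthetic band values; real workflows run this on the LISS IV bands in remote-sensing software.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(5)

    # synthetic training pixels: rows are pixels, columns are spectral bands
    train = {
        0: rng.multivariate_normal([0.3, 0.5, 0.2], 0.01 * np.eye(3), 300),  # pervious
        1: rng.multivariate_normal([0.6, 0.4, 0.5], 0.01 * np.eye(3), 300),  # impervious
    }

    # fit one Gaussian per class: the maximum likelihood classification model
    models = {c: multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False))
              for c, X in train.items()}

    def classify(pixels):
        scores = np.column_stack([models[c].logpdf(pixels) for c in sorted(models)])
        return scores.argmax(axis=1)   # class with the highest likelihood wins

    test = rng.multivariate_normal([0.6, 0.4, 0.5], 0.01 * np.eye(3), 10)
    print(classify(test))              # mostly 1 (impervious)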

Abstract:

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the maximum likelihood estimator (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for these methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(1/\sqrt{n})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
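
A minimal sketch of a Li-Barron-style greedy update with Gaussian components on synthetic data: each step adds one component by maximizing the likelihood of a convex combination of the current mixture and the new component. Component class, initialization and stopping rule are assumptions; the paper's analysis concerns general bounded density classes.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

    # step 0: the single best Gaussian (its MLE is the sample mean and std)
    f = norm.pdf(x, x.mean(), x.std())
    components = [(1.0, x.mean(), x.std())]

    # greedy steps: add one component at a time, optimizing the convex combination
    for _ in range(2):
        def neg_ll(params):
            mu, log_s, logit_a = params
            a = 1 / (1 + np.exp(-logit_a))
            g = norm.pdf(x, mu, np.exp(log_s))
            return -np.sum(np.log((1 - a) * f + a * g))
        best = min((minimize(neg_ll, x0=[m, 0.0, -1.0], method="Nelder-Mead")
                    for m in np.percentile(x, [10, 50, 90])),   # a few starting points
                   key=lambda r: r.fun)
        mu, log_s, logit_a = best.x
        a = 1 / (1 + np.exp(-logit_a))
        f = (1 - a) * f + a * norm.pdf(x, mu, np.exp(log_s))
        components = [(w * (1 - a), m, s) for w, m, s in components] + [(a, mu, np.exp(log_s))]

    print(components)   # (weight, mean, std) triples of the fitted combination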

Abstract:

We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and the inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and C-reactive protein that includes sex as a covariate.
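
A minimal sketch of the likelihood-ratio machinery with a GLM, covariate and gene-covariate interaction, using ordinary logistic regression on synthetic data; it ignores the within-family dependence that the paper's procedure handles, and all names and effect sizes are assumptions.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(7)
    n = 1000
    geno = rng.binomial(2, 0.3, n)            # genotype coded 0/1/2
    sex = rng.binomial(1, 0.5, n)             # covariate
    eta = -1 + 0.5 * geno + 0.3 * sex + 0.4 * geno * sex
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

    X_full = sm.add_constant(np.column_stack([geno, sex, geno * sex]))
    X_null = sm.add_constant(sex)             # null model: covariate only

    ll_full = sm.Logit(y, X_full).fit(disp=0).llf
    ll_null = sm.Logit(y, X_null).fit(disp=0).llf

    lr = 2 * (ll_full - ll_null)              # tests genetic effect plus interaction
    print(lr, chi2.sf(lr, df=2))              # likelihood ratio statistic and p-value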

Abstract:

Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use all clusters with exactly one case, those with exactly two cases and those with exactly three cases to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice by considering the estimates, their robustness and the loss in efficiency. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
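
A minimal sketch of the unclustered baseline the paper builds on: the zero-truncated Poisson MLE (the rate solves lambda / (1 - exp(-lambda)) = sample mean of the observed counts) followed by a Horvitz-Thompson estimate of population size. Clustering and the robust one/two/three-case variants are not shown; all values are synthetic.

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(8)
    lam_true, N_true = 1.2, 500
    counts = rng.poisson(lam_true, N_true)
    observed = counts[counts > 0]              # the zero class is unobserved
    n = len(observed)

    # zero-truncated Poisson MLE: solve lam / (1 - exp(-lam)) = observed sample mean
    xbar = observed.mean()
    lam_hat = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-6, 50)

    # Horvitz-Thompson estimator of the total population size
    N_hat = n / (1 - np.exp(-lam_hat))
    print(lam_hat, N_hat)                      # should be near 1.2 and 500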

Abstract:

This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of its having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other, either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
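
A short Monte Carlo illustrating the selection bias the paper addresses (not its proposed estimators): with two arms of equal true mean, reporting the mean of whichever arm looks better is biased upward by sigma/sqrt(n*pi). All values are illustrative.

    import numpy as np

    rng = np.random.default_rng(9)
    sigma, n, reps = 1.0, 50, 100_000
    mu = np.array([0.0, 0.0])                 # two treatments with equal true means

    xbars = mu + sigma / np.sqrt(n) * rng.standard_normal((reps, 2))
    selected = xbars.max(axis=1)              # the mean of the selected (better-looking) arm

    # naive estimate vs its theoretical bias: E[max of two iid N(0, s^2)] = s / sqrt(pi)
    print(selected.mean(), sigma / np.sqrt(n * np.pi))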

Abstract:

We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1-39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343-365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33-53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655-661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
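
A minimal sketch of the unadjusted profile likelihood for the Weibull shape parameter, without covariates or censoring: for each fixed shape, the scale MLE has a closed form, so the profile is a one-dimensional function. The Cox-Reid and Barndorff-Nielsen adjustments discussed above add correction terms to this profile log-likelihood; they are not implemented here.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(10)
    t = rng.weibull(1.8, 200) * 2.0            # Weibull data, shape 1.8, scale 2.0

    def profile_neg_ll(k):
        # for fixed shape k, the MLE of the scale is (mean(t^k))^(1/k)
        scale = np.mean(t**k) ** (1 / k)
        z = (t / scale) ** k
        # Weibull log-density: log k - k*log(scale) + (k-1)*log t - (t/scale)^k
        return -np.sum(np.log(k) + (k - 1) * np.log(t) - k * np.log(scale) - z)

    fit = minimize_scalar(profile_neg_ll, bounds=(0.1, 10), method="bounded")
    print(fit.x)                               # profile MLE of the shape, near 1.8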