984 results for Asymptotic Normality
Abstract:
This paper develops nonparametric tests of independence between two stationary stochastic processes. The testing strategy boils down to gauging the closeness between the joint and the product of the marginal stationary densities. For that purpose, I take advantage of a generalized entropic measure so as to build a class of nonparametric tests of independence. Asymptotic normality and local power are derived using the functional delta method for kernels, whereas finite sample properties are investigated through Monte Carlo simulations.
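A minimal numerical sketch of the idea behind such a test, assuming i.i.d. data, Gaussian kernel density estimates, and a Hellinger-type member of the entropic family; the paper's generalized entropic measure, bandwidth choices, and dependent-data asymptotics are not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)          # one series (i.i.d. here for simplicity)
y = rng.standard_normal(n)          # independent of x under the null

joint = gaussian_kde(np.vstack([x, y]))     # kernel estimate of the joint density
fx, fy = gaussian_kde(x), gaussian_kde(y)   # kernel estimates of the marginals

# Hellinger-type entropic divergence between the joint density and the
# product of the marginals, averaged over the sample:
#   H^2 = 1 - E_joint[ sqrt(f_x f_y / f_joint) ],
# which is zero exactly when the joint equals the product of the marginals.
ratio = fx(x) * fy(y) / joint(np.vstack([x, y]))
stat = 1.0 - np.mean(np.sqrt(ratio))   # near zero under independence
```

Large values of `stat` indicate dependence; the paper derives the null distribution and local power of tests built on such statistics.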
Abstract:
In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
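A stylized simulation of a two-regime smooth-transition ACD process in the spirit of the model described above; the parameter values, the logistic transition in the lagged duration, and the unit-mean exponential error are all illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two ACD(1,1) regimes; coefficients are illustrative (alpha + beta < 1 in both).
omega = np.array([0.1, 0.3])
alpha = np.array([0.05, 0.15])
beta = np.array([0.80, 0.70])
gamma, c = 5.0, 1.0              # slope and location of the logistic transition

x = np.empty(n)                  # durations
psi = np.empty(n)                # conditional expected durations
psi[0] = x[0] = 1.0
for i in range(1, n):
    # smooth transition driven by the past duration (sign/size asymmetry channel)
    g = 1.0 / (1.0 + np.exp(-gamma * (x[i - 1] - c)))
    w = np.array([1.0 - g, g])   # regime weights
    psi[i] = w @ (omega + alpha * x[i - 1] + beta * psi[i - 1])
    x[i] = psi[i] * rng.exponential()   # multiplicative error with unit mean
```

Letting the number of such regimes grow is what delivers the universal-approximation property the abstract refers to.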
Abstract:
This paper presents semiparametric estimators for treatment effects parameters when selection to treatment is based on observable characteristics. The parameters of interest in this paper are those that capture summarized distributional effects of the treatment. In particular, the focus is on the impact of the treatment calculated by differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here inequality treatment effects. The estimation procedure involves a first nonparametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. Using the reweighting method to estimate parameters of the marginal distribution of potential outcomes, weighted sample versions of inequality measures are computed in the second step. Calculations of semiparametric efficiency bounds for inequality treatment effects parameters are presented. Root-N consistency, asymptotic normality, and the achievement of the semiparametric efficiency bound are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the behavior in finite samples of the estimator derived in the paper.
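A toy sketch of the two-step procedure, assuming a single covariate, a Nadaraya-Watson (kernel regression) estimate of the propensity score, and the variance as the inequality measure; the data-generating process and all tuning constants are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
x = rng.standard_normal(n)                         # observed covariate
e_true = 1.0 / (1.0 + np.exp(-x))                  # true propensity score
d = (rng.uniform(size=n) < e_true).astype(float)   # treatment indicator
# treated outcomes have noise sd 2, controls sd 1, so Var Y(1) - Var Y(0) = 3
y = x + (1.0 + d) * rng.standard_normal(n)

# Step 1 (nonparametric): Nadaraya-Watson estimate of the propensity score.
h = 0.3
k = np.exp(-0.5 * (np.subtract.outer(x, x) / h) ** 2)
e_hat = (k @ d) / k.sum(axis=1)

# Step 2: inverse-probability reweighting, then a weighted inequality measure
# (here the variance of each potential-outcome distribution).
w1 = d / e_hat
w0 = (1.0 - d) / (1.0 - e_hat)

def wvar(y, w):
    m = np.average(y, weights=w)
    return np.average((y - m) ** 2, weights=w)

ite = wvar(y, w1) - wvar(y, w0)   # inequality treatment effect (true value 3)
```

The paper works with general inequality measures (not just the variance) and establishes root-N consistency and efficiency for the resulting estimators.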
Abstract:
This paper presents calculations of semiparametric efficiency bounds for quantile treatment effects parameters when selection to treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric estimation and a computation of the difference between the solutions of two distinct minimization problems. Root-N consistency, asymptotic normality, and the achievement of the semiparametric efficiency bound are shown for one of the three estimators. In the final part of the paper, an empirical application to a job training program reveals the importance of heterogeneous treatment effects, showing that for this program the effects are concentrated in the upper quantiles of the earnings distribution.
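A minimal sketch of a quantile treatment effect as the difference between two weighted quantile minimization problems, assuming for simplicity that the propensity score is known (the paper estimates it nonparametrically in a first step); the data-generating process is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.standard_normal(n)
e = 1.0 / (1.0 + np.exp(-x))                  # propensity score, assumed known here
d = (rng.uniform(size=n) < e).astype(float)
y = x + d * 1.0 + rng.standard_normal(n)      # constant treatment effect of 1.0

def weighted_quantile(y, w, q):
    """Minimizer of the weighted check-function loss, via the weighted CDF."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w) / np.sum(w)
    return y[np.searchsorted(cw, q)]

q = 0.5
q1 = weighted_quantile(y, d / e, q)               # q-quantile of Y(1)
q0 = weighted_quantile(y, (1 - d) / (1 - e), q)   # q-quantile of Y(0)
qte = q1 - q0                                     # ≈ 1.0 at every quantile here
```

Varying `q` across, say, 0.1 to 0.9 traces out the quantile treatment effect curve, which is how the heterogeneity in the job-training application is detected.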
Abstract:
In this work, the paper of Campos and Dorea [3] is detailed. In that article, a kernel estimator was applied to a sequence of independent and identically distributed random variables with general state space. In Chapter 2, the estimator's properties, such as asymptotic unbiasedness, consistency in quadratic mean, strong consistency, and asymptotic normality, are verified. In Chapter 3, using the R software, numerical experiments are developed in order to give a visual idea of the estimation process.
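A small numerical illustration of the consistency property discussed above, assuming a Gaussian kernel, standard normal data, and a bandwidth of the rate-optimal order n^(-1/5) (the original experiments were done in R; this sketch uses Python):

```python
import numpy as np

rng = np.random.default_rng(4)

def kde_at(x0, data, h):
    """Gaussian kernel density estimate at a single point x0."""
    u = (x0 - data) / h
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)) / h

true = 1.0 / np.sqrt(2 * np.pi)      # standard normal density at 0
for n in (100, 1000, 10000):
    data = rng.standard_normal(n)
    h = n ** (-1 / 5)                # bandwidth of the rate-optimal order
    est = kde_at(0.0, data, h)
    # |est - true| shrinks as n grows, illustrating consistency
    # in quadratic mean (bias and variance both vanish).
```

Repeating this over many replications and plotting the estimates against the true density gives exactly the kind of visual check described in Chapter 3.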
Abstract:
This thesis concerns the estimation of parameters in discrete-time ergodic Markov processes in general and in the CIR model in particular. The CIR model is a stochastic differential equation proposed by Cox, Ingersoll, and Ross (1985) to describe the dynamics of interest rates. The problem at hand is the estimation of the parameters of the drift and diffusion coefficients from equidistant discrete observations of the CIR process. After a brief introduction to the CIR model, we use the method of martingale estimating functions and estimating equations, investigated in particular by Bibby and Sørensen, to study the problem of parameter estimation in ergodic Markov processes in full generality. Following work by Sørensen (1999), sufficient conditions (in the form of regularity assumptions on the estimating function) are given for the existence, strong consistency, and asymptotic normality of solutions of a martingale estimating equation. Applied to the special case of likelihood estimation, these conditions also ensure local asymptotic normality of the model. Furthermore, a simple criterion for Godambe-Heyde optimality of estimating functions is given, and we sketch how it can be used in important special cases to construct optimal estimating functions explicitly. The general results are then applied to the discretized CIR model. We analyze several estimators for the drift and diffusion coefficients proposed by Overbeck and Rydén (1997), which are defined as solutions of quadratic martingale estimating functions, and compute the optimal element of this class.
Finally, we generalize results of Overbeck and Rydén (1997) by proving the existence of a strongly consistent and asymptotically normal solution of the likelihood equation and establishing local asymptotic normality for the CIR model without restrictions on the parameter space.
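A simple illustration of estimating the CIR drift parameters from equidistant discrete observations, using the exact conditional-mean relation as a (non-optimal) estimating equation; the Euler simulation scheme and all parameter values are assumptions for this sketch, not the optimal quadratic estimating functions analyzed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)
kappa, theta, sigma = 2.0, 1.0, 0.2      # illustrative parameter values
dt, n = 0.01, 20000

# Euler scheme for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW
# (2*kappa*theta > sigma^2 here, so the process stays positive).
x = np.empty(n)
x[0] = theta
for i in range(1, n):
    x[i] = (x[i - 1] + kappa * (theta - x[i - 1]) * dt
            + sigma * np.sqrt(max(x[i - 1], 0.0) * dt) * rng.standard_normal())

# Moment-type estimating equation for the drift, based on the exact
# conditional mean  E[X_{i+1} | X_i] = theta + (X_i - theta) * exp(-kappa*dt):
# regressing X_{i+1} on X_i gives slope b ≈ exp(-kappa*dt) and
# intercept a ≈ theta * (1 - b).
b, a = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(b) / dt
theta_hat = a / (1.0 - b)
```

Godambe-Heyde optimality amounts to choosing the weights in such estimating equations so that the asymptotic variance of the resulting estimator is minimized within the class.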
Abstract:
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper, a new moment-type estimator for the number of motor units in a muscle is defined, which is derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old, healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
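A stylized version of the idea, assuming each of N units is observed (survives thinning) independently with a known probability p, so the observed count per trial is Binomial(N, p); this is a simplified stand-in for the paper's random-sum estimator, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
N_true, p, m = 600, 0.05, 2000           # N unknown in practice; p assumed known

# Observed (thinned) counts over m independent trials.
counts = rng.binomial(N_true, p, size=m)

# Moment-type estimator: E[count] = N * p, so N_hat = mean(count) / p.
N_hat = counts.mean() / p

# Bootstrap percentile confidence interval for N, as in the abstract.
boot = rng.choice(counts, size=(4000, m), replace=True).mean(axis=1) / p
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Asymptotic normality of `N_hat` follows from the CLT for the sample mean, which is what justifies the approximate intervals alongside the bootstrap ones.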
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.
Abstract:
Markov chain Monte Carlo (MCMC) is a method of producing a correlated sample in order to estimate features of a complicated target distribution via simple ergodic averages. A fundamental question in MCMC applications is: when should the sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the MCMC sampling the first time the width of a confidence interval based on the ergodic averages is less than a user-specified value. Hence, calculating Monte Carlo standard errors is a critical step in assessing the output of the simulation. In particular, we consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We describe sufficient conditions for the strong consistency and asymptotic normality of both methods and investigate their finite sample properties in a variety of examples.
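A minimal sketch of the batch means variance estimate and the fixed-width stopping check, using an AR(1) chain as a stand-in for MCMC output (its asymptotic variance for the mean is known in closed form, which makes the estimate easy to sanity-check); batch size, tolerance, and chain parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
phi = 0.5

# Toy "MCMC output": AR(1) with innovation variance 1, so the asymptotic
# variance in the CLT for the sample mean is 1 / (1 - phi)^2 = 4.
chain = np.empty(n)
chain[0] = 0.0
for t in range(1, n):
    chain[t] = phi * chain[t - 1] + rng.standard_normal()

# Batch means: split the chain into a batches of size b and use the
# between-batch variability to estimate the asymptotic variance.
b = int(np.sqrt(n))                  # batch size
a = n // b                           # number of batches
means = chain[: a * b].reshape(a, b).mean(axis=1)
sigma2_hat = b * np.var(means, ddof=1)

# Fixed-width stopping rule: stop once the CI half-width falls below eps.
half_width = 1.96 * np.sqrt(sigma2_hat / n)
eps = 0.05
done = half_width < eps
```

In practice the check is run repeatedly as the chain grows, and the sufficient conditions in the paper guarantee that `sigma2_hat` is strongly consistent so the rule terminates with a valid interval.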
Abstract:
The distribution of the number of heterozygous loci in two randomly chosen gametes or in a random diploid zygote provides information regarding the nonrandom association of alleles among different genetic loci. Two alternative statistics may be employed for detection of nonrandom association of genes at different loci when observations are made on these distributions: the observed variance of the number of heterozygous loci (s_k^2) and a goodness-of-fit criterion (X^2) contrasting the observed distribution with that expected under the hypothesis of random association of genes. It is shown, by simulation, that s_k^2 is statistically more efficient than X^2 at detecting a given extent of nonrandom association. Asymptotic normality of s_k^2 is justified, and X^2 is shown to follow a chi-square (χ^2) distribution with partial loss of degrees of freedom arising from the estimation of parameters from the marginal gene frequency data. Whenever direct evaluations of linkage disequilibrium values are possible, tests based on maximum likelihood estimators of linkage disequilibria require a smaller sample size (number of zygotes or gametes) to detect a given level of nonrandom association than tests conducted on the basis of s_k^2. Summarization of multilocus genotype (or haplotype) data into the different numbers of heterozygous loci thus amounts to an appreciable loss of information.
Abstract:
Several tests for the comparison of different groups in the randomized complete block design exist. However, there is a lack of robust estimators for the location difference between one group and all the others on the original scale. The relative marginal effects are commonly used in this situation, but they are more difficult for less experienced practitioners to interpret and use because of the different scale. In this paper, two nonparametric estimators for the comparison of one group against the others in the randomized complete block design are presented. Theoretical results such as asymptotic normality, consistency, translation invariance, scale preservation, unbiasedness, and median unbiasedness are derived. The finite sample behavior of these estimators is investigated by simulations of different scenarios. In addition, possible confidence intervals based on these estimators are discussed, and their behavior is also investigated by simulations.
Abstract:
The history of the logistic function since its introduction in 1838 is reviewed, and the logistic model for a polychotomous response variable is presented with a discussion of the assumptions involved in its derivation and use. Following this, the maximum likelihood estimators for the model parameters are derived along with a Newton-Raphson iterative procedure for evaluation. A rigorous mathematical derivation of the limiting distribution of the maximum likelihood estimators is then presented using a characteristic function approach. An appendix with theorems on the asymptotic normality of sample sums when the observations are not identically distributed, with proofs, supports the presentation on asymptotic properties of the maximum likelihood estimators. Finally, two applications of the model are presented using data from the Hypertension Detection and Follow-up Program, a prospective, population-based, randomized trial of treatment for hypertension. The first application compares the risk of five-year mortality from cardiovascular causes with that from noncardiovascular causes; the second application compares risk factors for fatal or nonfatal coronary heart disease with those for fatal or nonfatal stroke.
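A compact sketch of the Newton-Raphson iteration for the dichotomous special case of the logistic model (the polychotomous version generalizes the same score and information matrix); the simulated data and parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + covariate
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.uniform(size=n) < p).astype(float)

# Newton-Raphson for the logit MLE:
#   beta <- beta + (X' W X)^{-1} X' (y - p),  W = diag(p * (1 - p)).
beta = np.zeros(2)
for _ in range(25):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p_hat * (1.0 - p_hat)
    grad = X.T @ (y - p_hat)              # score vector
    hess = X.T @ (X * W[:, None])         # observed = expected information here
    step = np.linalg.solve(hess, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
```

The inverse of the final `hess` also estimates the asymptotic covariance of the MLE, which is the quantity whose limiting normality the paper derives via characteristic functions.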
Abstract:
In this paper, we indicate how integer-valued autoregressive time series GINAR(d) of order d, d ≥ 1, are simple functionals of multitype branching processes with immigration. This allows the derivation of a simple criterion for the existence of a stationary distribution of the time series, thus proving and extending some results by Al-Osh and Alzaid [1], Du and Li [9], and Gauthier and Latour [11]. One can then transfer results on estimation in subcritical multitype branching processes to stationary GINAR(d) series and obtain consistency and asymptotic normality for the corresponding estimators. The technique covers autoregressive moving average time series as well.
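A minimal illustration for the order-one case, assuming binomial thinning and Poisson immigration: the survivors of the previous count are the branching offspring and the Poisson arrivals are the immigration, which is exactly the branching-process reading described above. Parameter values and the conditional-least-squares estimator are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(9)
alpha, lam, n = 0.5, 2.0, 20000   # thinning probability and immigration mean

# INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, where "∘" is binomial thinning
# (each of the X_{t-1} individuals survives with probability alpha) and
# eps_t ~ Poisson(lam) is the immigration; alpha < 1 means subcritical.
x = np.empty(n, dtype=int)
x[0] = int(lam / (1 - alpha))     # start near the stationary mean
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Conditional least squares: E[X_t | X_{t-1}] = alpha * X_{t-1} + lam,
# so regressing X_t on X_{t-1} recovers both parameters.
alpha_hat, lam_hat = np.polyfit(x[:-1], x[1:], 1)
```

Subcriticality (alpha < 1) is what gives the stationary distribution, and the consistency and asymptotic normality of such estimators transfer from the branching-process theory as the abstract indicates.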
Abstract:
2000 Mathematics Subject Classification: 60J80.