Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to substantial reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example of how to write parallel programs for Octave.
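The feature that makes the Monte Carlo problem parallelizable is that replications are independent, so the total number of replications can be split into chunks and each chunk run on a separate node. The GNU Octave sketch below shows only that decomposition, with hypothetical names (montecarlo_chunk, n_nodes); the programs described in the paper dispatch the chunks to cluster nodes rather than running them in a serial loop as done here.

    1;  % script file marker

    % One chunk of a Monte Carlo study: simulate, estimate, record.
    % (Hypothetical example; any estimator could stand in for mean().)
    function results = montecarlo_chunk(n_reps, n_obs)
      results = zeros(n_reps, 1);
      for r = 1:n_reps
        x = randn(n_obs, 1);     % simulated sample
        results(r) = mean(x);    % stand-in for the estimator of interest
      endfor
    endfunction

    n_reps_total = 1000;
    n_nodes = 4;                              % assumed number of compute nodes
    reps_per_node = n_reps_total / n_nodes;

    all_results = [];
    for node = 1:n_nodes                      % each iteration is one node's job
      all_results = [all_results; montecarlo_chunk(reps_per_node, 50)];
    endfor
    printf("pooled %d replications, mean estimate %f\n", ...
           numel(all_results), mean(all_results));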
Abstract:
This note shows how to set up a Linux cluster for MPI parallel processing using ParallelKnoppix, a bootable CD.
Abstract:
This note describes ParallelKnoppix, a bootable CD that allows creation of a Linux cluster in very little time. An experienced user can create a cluster ready to execute MPI programs in less than 10 minutes. The computers used may be heterogeneous machines of the IA-32 architecture. When the cluster is shut down, all machines except one are left in their original state, and the remaining machine can be returned to its original state by deleting a single directory. The system thus provides a means of using non-dedicated computers to create a cluster. An example session is documented.
Abstract:
The paper documents MINTOOLKIT for GNU Octave. MINTOOLKIT provides functions for minimization and numeric differentiation. The main algorithms are BFGS, LBFGS, and simulated annealing. Examples are given.
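As a rough illustration of the kind of numeric differentiation such a toolkit relies on (this is not MINTOOLKIT's actual interface, whose function names and signatures may differ), a central-difference gradient in Octave can be written as follows:

    1;  % script file marker

    % Central-difference numerical gradient of a scalar function f at x.
    % Illustrative only; MINTOOLKIT's own functions may differ.
    function g = numeric_gradient(f, x, h)
      if nargin < 3
        h = 1e-6;
      endif
      k = numel(x);
      g = zeros(k, 1);
      for i = 1:k
        e = zeros(k, 1);
        e(i) = h;
        g(i) = (f(x + e) - f(x - e)) / (2 * h);
      endfor
    endfunction

    % Example: gradient of sum(x.^2) at [1; 2] is approximately [2; 4].
    obj = @(x) sum(x .^ 2);
    disp(numeric_gradient(obj, [1; 2]))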
Abstract:
This is a project to develop a document for teaching graduate econometrics that is "open source", specifically, licensed under the GNU GPL. That is, anyone can access the document in editable form and can modify it, as long as they make their modifications available. This allows for personalization, as well as a simple way to make contributions and error corrections. The hope is that people preparing to teach econometrics for the first time might find it useful, and eventually be motivated to contribute back to the project. The central document is something between a set of lecture notes and a textbook. It's not as terse as lecture notes, but not as complete or well-referenced as a textbook. Of course, the document is constantly evolving, and you are welcome to modify it as you like. The document contains (at least when viewed in HTML or PDF form) hyperlinks to example programs written using the GNU Octave language. The document itself is written using the LyX word processor. LyX documents can be exported as LaTeX, so the system is quite portable.
Abstract:
The Hausman (1978) test is based on the vector of differences of two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as generalized method of moments (GMM) and quasi-maximum likelihood (QML) estimators are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present simulation results as well as empirical results for utilization of health care services.
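In outline, the generalized test is a quadratic form in the difference of the two estimators, weighted by a generalized inverse of the estimated variance of that difference. The Octave sketch below assumes the two estimates, their covariance matrices, and their cross-covariance have already been estimated (the paper discusses how to do this with well-known methods); it illustrates the form of the statistic, not the paper's code.

    1;  % script file marker

    % Generalized Hausman statistic when neither estimator is efficient:
    % H = (b1 - b2)' * pinv(V) * (b1 - b2), where V estimates the variance
    % of the difference. V1, V2 are the covariance matrix estimates and
    % C = Cov(b1, b2) is the estimated cross-covariance (assumed available).
    function [H, df, pval] = hausman_general(b1, b2, V1, V2, C)
      d = b1 - b2;
      V = V1 + V2 - C - C';                 % variance of the difference
      Vinv = pinv(V);                       % generalized inverse handles rank deficiency
      H = d' * Vinv * d;
      df = rank(V);
      pval = 1 - gammainc(H / 2, df / 2);   % chi-square(df) upper tail probability
    endfunction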
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson, a hurdle semiparametric Poisson, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice among these models when analyzing data on health care demand.
Abstract:
We re-examine the theoretical concept of a production function for cognitive achievement, and argue that an indirect production function that depends upon the variables that constrain parents' choices is both more tractable from an econometric point of view and more interesting from an economic point of view than is a direct production function that depends upon a detailed list of direct inputs such as the number of books in the household. We estimate flexible econometric models of indirect production functions for two achievement measures from the Woodcock-Johnson Revised battery, using data from two waves of the Child Development Supplement to the PSID. Elasticities of achievement measures with respect to family income and parents' educational levels are positive and significant. Gaps between scores of black and white children narrow or remain constant as children grow older, a result that differs from previous findings in the literature. The elasticities of achievement scores with respect to family income are substantially higher for children of black families, and there are some notable differences in elasticities with respect to parents' educational levels between blacks and whites.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
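The central computational step is a nonparametric regression on the simulated data: conditional moments are read off a kernel smooth of a long simulation. The Octave sketch below shows a Nadaraya-Watson estimate of E(y|x) from simulated pairs, with an arbitrary Gaussian kernel and bandwidth; it illustrates the smoothing step only and is not the paper's implementation.

    1;  % script file marker

    % Nadaraya-Watson kernel estimate of the conditional moment E(y|x),
    % computed from a long simulation (illustrative sketch only).
    function m = kernel_cond_moment(y_sim, x_sim, x_eval, bw)
      n = numel(x_eval);
      m = zeros(n, 1);
      for i = 1:n
        u = (x_sim - x_eval(i)) / bw;
        w = exp(-0.5 * u .^ 2);          % Gaussian kernel weights
        m(i) = sum(w .* y_sim) / sum(w);
      endfor
    endfunction

    % Toy long simulation at a trial parameter value: E(y|x) = x^2,
    % recovered by smoothing rather than by simple averaging.
    x_sim = randn(50000, 1);
    y_sim = x_sim .^ 2 + 0.5 * randn(50000, 1);
    disp(kernel_cond_moment(y_sim, x_sim, [0; 1; 2], 0.1))   % roughly [0; 1; 4]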
Abstract:
PelicanHPC is a rapid (around 5 minutes, when you know what you're doing) means of setting up a high performance computing (HPC) cluster for parallel computing using MPI. This tutorial gives a basic description of what PelicanHPC does, addresses how to use the released CD images to set up an HPC cluster, and gives some basic examples of usage.
Abstract:
Consider a model with parameter phi, and an auxiliary model with parameter theta. Let phi be randomly sampled from a given density over the known parameter space. Monte Carlo methods can be used to draw simulated data and compute the corresponding estimate of theta, say theta_tilde. A large set of tuples (phi, theta_tilde) can be generated in this manner. Nonparametric methods may be used to fit the function E(phi|theta_tilde=a), using these tuples. It is proposed to estimate phi using the fitted E(phi|theta_tilde=theta_hat), where theta_hat is the auxiliary estimate computed from the real sample data. This estimator is consistent and asymptotically normally distributed under certain assumptions. Monte Carlo results for dynamic panel data and vector autoregressions show that this estimator can have very attractive small-sample properties. Confidence intervals can be constructed using the quantiles of the phi draws for which theta_tilde is close to theta_hat. Such confidence intervals are found to have very accurate coverage.
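A minimal sketch of the fitting step, for a scalar parameter: average phi over the simulated draws whose theta_tilde is closest to the auxiliary estimate from the real data, and take quantiles of those same draws for a confidence interval. The k-nearest-neighbour form below is one simple choice of nonparametric fit; the names and the choice of k are illustrative, not the paper's implementation.

    1;  % script file marker

    % Estimate E(phi | theta_tilde = theta_hat) by a k-nearest-neighbour
    % average over simulated (phi, theta_tilde) pairs, and form a
    % confidence interval from the quantiles of the neighbouring phi draws.
    % (Scalar case; illustrative only.)
    function [phi_hat, ci] = knn_fit(phi_draws, theta_tilde_draws, theta_hat, k)
      [~, idx] = sort(abs(theta_tilde_draws - theta_hat));
      neighbours = phi_draws(idx(1:k));
      phi_hat = mean(neighbours);
      ci = quantile(neighbours, [0.025, 0.975]);
    endfunction

    % Toy usage with simulated pairs:
    phi_draws = rand(10000, 1);
    theta_tilde_draws = phi_draws + 0.1 * randn(10000, 1);   % noisy auxiliary estimates
    [phi_hat, ci] = knn_fit(phi_draws, theta_tilde_draws, 0.5, 200)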
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
Abstract:
This is a guide that explains how to use software that implements the simulated nonparametric moments (SNM) estimator proposed by Creel and Kristensen (2009). The guide shows how results of that paper may easily be replicated, and explains how to install and use the software for estimation of simulable econometric models.
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.
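When the density of the statistic is unknown, the simulated version replaces it with a kernel density estimate built from simulations of the statistic at each trial parameter value. The Octave sketch below shows that idea for a scalar statistic; simulate_Zn is a hypothetical user-supplied function returning one simulated draw of the statistic at a given parameter value, and the Gaussian kernel and bandwidth are arbitrary choices. Maximizing this objective over the parameter with a numerical optimizer would give a simulated MIL estimate; the Bayesian (BIL) version combines the indirect likelihood with a prior.

    1;  % script file marker

    % Simulated indirect log-likelihood of the observed statistic zn_obs at
    % a trial parameter value: a kernel density estimate over S simulated
    % draws of the statistic. (Scalar statistic; simulate_Zn is a
    % hypothetical user-supplied simulator; illustrative only.)
    function ll = sim_indirect_loglik(param, zn_obs, simulate_Zn, S, bw)
      draws = zeros(S, 1);
      for s = 1:S
        draws(s) = simulate_Zn(param);
      endfor
      u = (draws - zn_obs) / bw;
      ll = log(mean(exp(-0.5 * u .^ 2)) / (bw * sqrt(2 * pi)));
    endfunction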
Abstract:
BACKGROUND: Quitting tobacco or alcohol use has been reported to reduce the head and neck cancer risk in previous studies. However, it is unclear how many years must pass following cessation of these habits before the risk is reduced, and whether the risk ultimately declines to the level of never smokers or never drinkers. METHODS: We pooled individual-level data from case-control studies in the International Head and Neck Cancer Epidemiology Consortium. Data were available from 13 studies on drinking cessation (9167 cases and 12 593 controls), and from 17 studies on smoking cessation (12 040 cases and 16 884 controls). We estimated the effect of quitting smoking and drinking on the risk of head and neck cancer and its subsites by calculating odds ratios (ORs) using logistic regression models. RESULTS: Quitting tobacco smoking for 1-4 years resulted in a head and neck cancer risk reduction [OR 0.70, confidence interval (CI) 0.61-0.81 compared with current smoking], with the risk reduction due to smoking cessation after ≥20 years (OR 0.23, CI 0.18-0.31) reaching the level of never smokers. For alcohol use, a beneficial effect on the risk of head and neck cancer was only observed after ≥20 years of quitting (OR 0.60, CI 0.40-0.89 compared with current drinking), reaching the level of never drinkers. CONCLUSIONS: Our results support the conclusion that cessation of tobacco smoking and cessation of alcohol drinking protect against the development of head and neck cancer.