977 results for Instrumental-variable Methods
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
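As an illustration of inference based on a pivotal function, here is a minimal sketch (not taken from the survey itself) of the classical Anderson-Rubin test, whose statistic is exactly F-distributed under normal errors regardless of instrument strength; inverting it over a grid gives a weak-instrument-robust confidence set. The design, sample size, and coefficients are invented for illustration.

```python
import numpy as np
from scipy import stats

def ar_test(y, x, Z, beta0):
    """Anderson-Rubin statistic and p-value for H0: beta = beta0."""
    n, k = Z.shape
    e = y - x * beta0                              # residuals under the null
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)     # projection of e onto span(Z)
    ar = (Pe @ Pe / k) / ((e - Pe) @ (e - Pe) / (n - k))
    return ar, stats.f.sf(ar, k, n - k)            # pivotal: F(k, n-k) under H0

# Simulated weakly identified design (all numbers illustrative).
rng = np.random.default_rng(0)
n, k, beta = 200, 4, 0.5
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
x = Z @ np.full(k, 0.1) + v                        # weak first stage
y = beta * x + 0.8 * v + rng.standard_normal(n)    # endogenous x

# Inverting the test gives a confidence set; under weak identification
# it can be wide, unbounded, or disjoint -- unlike Wald intervals.
grid = np.linspace(-3.0, 4.0, 701)
accepted = [b for b in grid if ar_test(y, x, Z, b)[1] > 0.05]
print(f"95% AR set spans roughly [{min(accepted):.2f}, {max(accepted):.2f}]")
```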
Abstract:
My thesis consists of three essays on bootstrap inference, both in panel data models and in models with many instrumental variables (IV), a large number of which may be weak. Because asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays study the asymptotic validity of existing bootstrap procedures and, when they are invalid, propose new valid bootstrap methods. The first chapter (co-written with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic, stationary panel data model with fixed effects. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap, and the pairs bootstrap. These methods are natural generalizations to the panel context of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time series models. We show that the OLS estimator obtained under the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design and pairs bootstraps, whose distributions are incorrectly centered at zero. However, the recursive-design and pairs bootstraps are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap produces the best results. The second chapter extends the pairs-bootstrap results to nonlinear dynamic panel models with fixed effects. These models are often estimated by maximum likelihood (MLE), which also suffers from bias. Recently, Dhaene and Jochmans (2014) proposed the split-jackknife estimation method. Although these estimators have normal asymptotic approximations centered at the true parameter, serious finite-sample distortions remain. Dhaene and Jochmans (2014) suggested the pairs bootstrap as an alternative in this context, without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, although it cannot estimate the distribution of the MLE. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the finite-sample distortions of the normal approximation. I also apply this bootstrap method to a model of female labor-force participation to construct valid confidence intervals. In the last chapter (co-written with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IV), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited-information maximum likelihood (LIML) estimator. The main reason is that they fail to adequately mimic the parameter that characterizes the strength of identification in the sample. We therefore propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap substantially reduces the finite-sample distortions of asymptotic Wald-type ($t$) tests, especially when the degree of endogeneity is high.
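For concreteness, the following minimal sketch (not the thesis's own code) combines the two panel ingredients described above: a half-panel split-jackknife bias correction of the within-group AR(1) estimator in the spirit of Dhaene and Jochmans, and a pairs bootstrap that resamples whole cross-sectional units. All simulation settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fe_ar1(Y):
    """Within-group (fixed-effects) AR(1) slope from an (N, T+1) panel."""
    y1, y0 = Y[:, 1:], Y[:, :-1]
    y1 = y1 - y1.mean(axis=1, keepdims=True)   # within-unit demeaning
    y0 = y0 - y0.mean(axis=1, keepdims=True)
    return (y0 * y1).sum() / (y0 * y0).sum()

def split_jackknife(Y):
    """Half-panel jackknife bias correction: 2*full - average of halves."""
    T = Y.shape[1] - 1
    h = T // 2
    return 2 * fe_ar1(Y) - 0.5 * (fe_ar1(Y[:, :h + 1]) + fe_ar1(Y[:, h:]))

# Simulate a stationary AR(1) panel with fixed effects (illustrative values).
N, T, rho = 200, 10, 0.5
alpha = rng.standard_normal(N)
Y = np.empty((N, T + 1))
Y[:, 0] = alpha / (1 - rho) + rng.standard_normal(N)
for t in range(1, T + 1):
    Y[:, t] = alpha + rho * Y[:, t - 1] + rng.standard_normal(N)

# Pairs bootstrap: resample whole units (rows) with replacement, keeping
# each unit's time series intact, and recompute the corrected estimator.
B = 499
boot = np.array([split_jackknife(Y[rng.integers(0, N, N)]) for _ in range(B)])
est, se = split_jackknife(Y), np.std
print(f"bias-corrected rho: {split_jackknife(Y):.3f}, "
      f"pairs-bootstrap s.e.: {boot.std(ddof=1):.3f}")
```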
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two stage least squares regression does correct measurement error bias but the results strongly depend on the instrumental variable choice. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed. A comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another, and considerably worse than disattenuated regression.
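The disattenuation idea is simple enough to show in a few lines: the OLS slope on an error-ridden regressor is attenuated by that regressor's reliability, so dividing by an externally estimated reliability (e.g., Cronbach's alpha) undoes the bias. The sketch below uses simulated data with a known reliability and covers only the single-regressor case, not the article's interaction-term extension.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate: true score xi, observed x = xi + measurement error (illustrative).
n, beta = 120, 1.0
xi = rng.standard_normal(n)
x = xi + 0.6 * rng.standard_normal(n)   # reliability = 1 / (1 + 0.36) ~ 0.735
y = beta * xi + 0.5 * rng.standard_normal(n)

# Naive OLS slope is attenuated toward zero by the reliability of x.
b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Disattenuation: divide by a reliability estimated elsewhere (e.g. from
# scale items via Cronbach's alpha); here we plug in the true value.
rel_x = 1.0 / 1.36
b_corr = b_ols / rel_x
print(f"OLS: {b_ols:.3f}  disattenuated: {b_corr:.3f}  (true beta = {beta})")
```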
Abstract:
BACKGROUND: Low plasma 25-hydroxyvitamin D (25[OH]D) concentration is associated with high arterial blood pressure and hypertension risk, but whether this association is causal is unknown. We used a mendelian randomisation approach to test whether 25(OH)D concentration is causally associated with blood pressure and hypertension risk. METHODS: In this mendelian randomisation study, we generated an allele score (25[OH]D synthesis score) based on variants of genes that affect 25(OH)D synthesis or substrate availability (CYP2R1 and DHCR7), which we used as a proxy for 25(OH)D concentration. We meta-analysed data for up to 108 173 individuals from 35 studies in the D-CarDia collaboration to investigate associations between the allele score and blood pressure measurements. We complemented these analyses with previously published summary statistics from the International Consortium on Blood Pressure (ICBP), the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium, and the Global Blood Pressure Genetics (Global BPGen) consortium. FINDINGS: In phenotypic analyses (up to n=49 363), increased 25(OH)D concentration was associated with decreased systolic blood pressure (β per 10% increase, -0·12 mm Hg, 95% CI -0·20 to -0·04; p=0·003) and reduced odds of hypertension (odds ratio [OR] 0·98, 95% CI 0·97-0·99; p=0·0003), but not with decreased diastolic blood pressure (β per 10% increase, -0·02 mm Hg, -0·08 to 0·03; p=0·37). In meta-analyses in which we combined data from D-CarDia and the ICBP (n=146 581, after exclusion of overlapping studies), each 25(OH)D-increasing allele of the synthesis score was associated with a change of -0·10 mm Hg in systolic blood pressure (-0·21 to -0·0001; p=0·0498) and a change of -0·08 mm Hg in diastolic blood pressure (-0·15 to -0·02; p=0·01). When D-CarDia and consortia data for hypertension were meta-analysed together (n=142 255), the synthesis score was associated with a reduced odds of hypertension (OR per allele, 0·98, 0·96-0·99; p=0·001). In instrumental variable analysis, each 10% increase in genetically instrumented 25(OH)D concentration was associated with a change of -0·29 mm Hg in diastolic blood pressure (-0·52 to -0·07; p=0·01), a change of -0·37 mm Hg in systolic blood pressure (-0·73 to 0·003; p=0·052), and an 8·1% decreased odds of hypertension (OR 0·92, 0·87-0·97; p=0·002). INTERPRETATION: Increased plasma concentrations of 25(OH)D might reduce the risk of hypertension. This finding warrants further investigation in an independent, similarly powered study.
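A hypothetical sketch of the core instrumental-variable step in such a study: an allele score serves as the instrument for the exposure, and the just-identified IV estimate is the ratio of reduced-form to first-stage covariances. All effect sizes below are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated cohort: allele score G, exposure (e.g. log 25(OH)D), outcome SBP.
n = 5000
G = rng.binomial(4, 0.3, n).astype(float)      # unweighted allele score
U = rng.standard_normal(n)                     # unobserved confounder
exposure = 0.15 * G - 0.4 * U + rng.standard_normal(n)
sbp = -0.2 * exposure + 0.8 * U + rng.standard_normal(n)

def iv_ratio(y, x, z):
    """Just-identified IV estimate: cov(z, y) / cov(z, x)."""
    zc = z - z.mean()
    return (zc @ (y - y.mean())) / (zc @ (x - x.mean()))

ols = np.cov(exposure, sbp)[0, 1] / np.var(exposure, ddof=1)
print(f"confounded OLS: {ols:.3f}")
print(f"IV (allele score): {iv_ratio(sbp, exposure, G):.3f}  (true = -0.2)")
```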
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
Abstract:
In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It reduces the computational cost of WAP tests by replacing the strongly unbiased condition. That restriction imposes, under the null hypothesis, that the test be uncorrelated with a given statistic whose dimension equals the number of instruments; the new boundary condition only imposes that the test be uncorrelated with a linear combination of that statistic. WAP tests under both restrictions are found to perform similarly in numerical exercises. We apply the different tests discussed to an empirical example: using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
Abstract:
Background: As the global population is ageing, studying cognitive impairments, including dementia, one of the leading causes of disability in old age worldwide, is of fundamental importance to public health. Retirement is a major transition in older age, and a focus on the complex impacts of its duration, timing, and voluntariness on health is important for future policy changes. Longer retirement periods, as well as leaving the workforce early, have been associated with poorer health, including reduced cognitive functioning. These associations are hypothesized to differ by gender, by pre-retirement educational and occupational experiences, and by post-retirement social factors and health conditions. Methods: A cross-sectional study is conducted to determine the relationship between duration and timing of retirement and cognitive function, using data from the five sites of the International Mobility in Aging Study (IMIAS). Cognitive function is assessed using Leganes Cognitive Test (LCT) scores in 2012. Data are analyzed using multiple linear regressions. Analyses are also done separately by site/region (Canada, Latin America, and Albania). Robustness checks include an analysis of cognitive change from 2012 to 2014 and of the effect of voluntariness of retirement on cognitive function. An instrumental variable (IV) approach is also applied to the cross-sectional and longitudinal analyses as a robustness check to address the potential endogeneity of the retirement variable. Results: Descriptive statistics highlight differences between men and women, as well as between sites. In linear regression analysis, there was no relationship between timing or duration of retirement and cognitive function in 2012 when adjusting for site/region, and no association between retirement characteristics and cognitive function in site/region-stratified analyses. In IV analysis, longer retirement and on-time or late retirement were associated with lower cognitive function among men, while there was no relationship between retirement characteristics and cognitive function among women. Conclusions: While the results of the thesis suggest a negative effect of retirement on cognitive function, especially among men, the relationship remains uncertain. A lack of power prevents drawing conclusions from the site/region-specific and site-adjusted analyses in both the linear and IV regressions.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
BACKGROUND: Epidemiological studies show that high circulating cystatin C is associated with risk of cardiovascular disease (CVD), independent of creatinine-based renal function measurements. It is unclear whether this relationship is causal, arises from residual confounding, and/or is a consequence of reverse causation. OBJECTIVES: The aim of this study was to use Mendelian randomization to investigate whether cystatin C is causally related to CVD in the general population. METHODS: We incorporated participant data from 16 prospective cohorts (n = 76,481) with 37,126 measures of cystatin C and added genetic data from 43 studies (n = 252,216) with 63,292 CVD events. We used the common variant rs911119 in CST3 as an instrumental variable to investigate the causal role of cystatin C in CVD, including coronary heart disease, ischemic stroke, and heart failure. RESULTS: Cystatin C concentrations were associated with CVD risk after adjusting for age, sex, and traditional risk factors (relative risk: 1.82 per doubling of cystatin C; 95% confidence interval [CI]: 1.56 to 2.13; p = 2.12 × 10⁻¹⁴). The minor allele of rs911119 was associated with decreased serum cystatin C (6.13% per allele; 95% CI: 5.75 to 6.50; p = 5.95 × 10⁻²¹¹), explaining 2.8% of the observed variation in cystatin C. Mendelian randomization analysis did not provide evidence for a causal role of cystatin C, with a causal relative risk for CVD of 1.00 per doubling of cystatin C (95% CI: 0.82 to 1.22; p = 0.994), which was statistically different from the observational estimate (p = 1.6 × 10⁻⁵). A causal effect of cystatin C was not detected for any individual component of CVD. CONCLUSIONS: Mendelian randomization analyses did not support a causal role of cystatin C in the etiology of CVD. As such, therapeutics targeted at lowering circulating cystatin C are unlikely to be effective in preventing CVD.
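The summary-statistic version of this kind of single-variant analysis reduces to a Wald ratio: the per-allele effect on the outcome divided by the per-allele effect on the exposure, with a first-order delta-method standard error. The sketch below uses placeholder numbers; only the -6.13% per-allele exposure effect echoes the abstract, and the outcome-side inputs are invented.

```python
import numpy as np

# Wald-ratio Mendelian randomization from summary statistics.
# beta_gx: per-allele effect on log cystatin C (-6.13% ~ -0.0613 on log scale);
# beta_gy: per-allele effect on CVD log relative risk (illustrative values).
beta_gx, se_gx = -0.0613, 0.0019
beta_gy, se_gy = 0.0005, 0.0045

beta_iv = beta_gy / beta_gx          # causal effect of exposure on outcome

# First-order delta-method variance of a ratio of independent estimates.
se_iv = abs(beta_iv) * np.sqrt((se_gy / beta_gy) ** 2 + (se_gx / beta_gx) ** 2)
print(f"Wald ratio: {beta_iv:.4f} (SE {se_iv:.4f})")   # ~null, wide interval
```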
Abstract:
At the beginning, this Ph.D. project led to an overview of the most common and emerging types of fraud, and possible countermeasures, in the olive oil sector. Furthermore, possible weaknesses in the current conformity check system for olive oil were highlighted. Among these, although organoleptic assessment is a fundamental tool for establishing the quality grade of virgin olive oils (VOOs), the scientific community has pointed out some of its drawbacks. In particular, the application of instrumental screening methods to support the panel test could reduce the workload of sensory panels and the cost of this analysis (e.g., for industries, distributors, and public and private control laboratories), permitting an increase in the number and efficiency of controls. On this basis, a research line called "Quantitative Panel Test" is one of the main expected outcomes of the OLEUM project and is also partially discussed in this doctoral dissertation. In this framework, analytical activities were carried out within this Ph.D. project to develop and validate analytical protocols for studying the volatile compound (VOC) profiles of the VOO headspace. Specifically, two chromatographic approaches to determine VOCs, one targeted and one semi-targeted, were investigated in this doctoral thesis. The results obtained will allow the possible establishment of concentration limits and ranges for selected volatile markers, related to fruitiness and defects, with the aim of supporting the panel test in the commercial categorization of VOOs. In parallel, a rapid instrumental screening method based on the analysis of VOCs was investigated to assist the panel test through a fast pre-classification of VOO samples at a known level of probability, thus increasing the efficiency of quality control.
Abstract:
In this Ph.D. project, original and innovative approaches for the quali-quantitative analysis of substances of abuse, as well as therapeutic agents with abuse potential and related compounds, were designed, developed, and validated for application to different fields such as forensics, clinical analysis, and pharmaceutics. All the parameters involved in the developed analytical workflows were properly and accurately optimised, from sample collection to sample pretreatment up to the instrumental analysis. Advanced dried blood microsampling technologies were developed, capable of bringing several advantages to the method as a whole, such as a significant reduction in solvent use, feasible storage and transportation conditions, and enhanced analyte stability. At the same time, the use of capillary blood increases subject compliance and overall method applicability when exploiting such innovative technologies. Both the biological and non-biological samples involved in this project were subjected to optimised pretreatment techniques developed ad hoc for each target analyte, also making use of advanced microextraction techniques. Finally, original and advanced instrumental analytical methods were developed based on high- and ultra-high-performance liquid chromatography (HPLC, UHPLC) coupled to different detection means (mainly mass spectrometry, but also electrochemical and spectrophotometric detection for screening purposes), and on attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) for solid-state analysis. Each method was designed to obtain highly selective, sensitive, yet sustainable systems and was validated according to international guidelines. All the methods developed herein proved suitable for the analysis of the compounds under investigation and may be useful tools in medicinal chemistry, pharmaceutical analysis, clinical studies, and forensic investigations.
Abstract:
The Republic of Haiti is the top recipient of international remittances in the Latin American and Caribbean (LAC) region relative to its gross domestic product (GDP). The downside of this observation may be that the country is also the world's leading exporter of skilled workers relative to its population size. The present research uses a zero-altered negative binomial model (with logit inflation) to model households' international migration decision process, and the Amemiya Generalized Least Squares method for endogenous regressors (instrumental-variable Tobit, IV-Tobit) to account for selectivity and endogeneity in assessing the impact of remittances on labor market outcomes. Results are in line with what has been found in this literature so far, namely a decline in labor supply in the presence of remittances. However, the impact of international remittances does not appear to be important in determining recipient households' labor participation behavior, particularly for women.
Abstract:
The effects of structural breaks in dynamic panels are more complicated than in time series models as the bias can be either negative or positive. This paper focuses on the effects of mean shifts in otherwise stationary processes within an instrumental variable panel estimation framework. We show the sources of the bias and a Monte Carlo analysis calibrated on United States bank lending data demonstrates the size of the bias for a range of auto-regressive parameters. We also propose additional moment conditions that can be used to reduce the biases caused by shifts in the mean of the data.
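A minimal sketch of the setting described above: an Anderson-Hsiao-type first-difference IV estimator for a dynamic panel, applied to data with an unmodeled mean shift. The design is invented, and the code does not implement the paper's additional moment conditions; it merely exhibits the estimator whose bias those conditions are meant to reduce.

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) panel with fixed effects and a common mean shift halfway through.
N, T, rho = 300, 12, 0.6
alpha = 1.0 + rng.standard_normal(N)
shift = np.where(np.arange(T + 2) >= (T + 2) // 2, 1.0, 0.0)   # break in mean
Y = np.empty((N, T + 2))
Y[:, 0] = rng.standard_normal(N)
for t in range(1, T + 2):
    Y[:, t] = alpha + shift[t] + rho * Y[:, t - 1] + rng.standard_normal(N)

# Anderson-Hsiao: first-difference out alpha_i, instrument dy_{t-1} by y_{t-2}.
# With the shift left unmodeled, the IV moment conditions are contaminated
# at the break date (a break dummy or extra moments would fix this).
dy    = (Y[:, 2:] - Y[:, 1:-1]).ravel()    # dy_t
dy1   = (Y[:, 1:-1] - Y[:, :-2]).ravel()   # dy_{t-1}
ylag2 = Y[:, :-2].ravel()                  # instrument y_{t-2}

rho_iv = (ylag2 @ dy) / (ylag2 @ dy1)
print(f"Anderson-Hsiao IV estimate of rho: {rho_iv:.3f} (true {rho})")
```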
Abstract:
This paper proposes a novel way of testing exogeneity of an explanatory variable without any parametric assumptions in the presence of a "conditional" instrumental variable. A testable implication is derived that if an explanatory variable is endogenous, the conditional distribution of the outcome given the endogenous variable is not independent of its instrumental variable(s). The test rejects the null hypothesis with probability one if the explanatory variable is endogenous and it detects alternatives converging to the null at a rate n^{-1/2}. We propose a consistent nonparametric bootstrap test to implement this testable implication. We show that the proposed bootstrap test can be asymptotically justified in the sense that it produces asymptotically correct size under the null of exogeneity, and it has unit power asymptotically. Our nonparametric test can be applied to the cases in which the outcome is generated by an additively non-separable structural relation or in which the outcome is discrete, which has not been studied in the literature.
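The testable implication can be probed with a simple conditional-independence check. The sketch below is not the paper's statistic: it bins the explanatory variable, takes the worst-bin Kolmogorov-Smirnov distance between outcome distributions across a binary instrument, and calibrates the test by permuting the instrument within bins (all design choices and numbers are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)

def cond_ks(y, z, bins):
    """Max over X-bins of the KS distance between Y | Z=1 and Y | Z=0."""
    stat = 0.0
    for b in np.unique(bins):
        y0 = np.sort(y[(bins == b) & (z == 0)])
        y1 = np.sort(y[(bins == b) & (z == 1)])
        if len(y0) < 5 or len(y1) < 5:
            continue
        grid = np.concatenate([y0, y1])
        F0 = np.searchsorted(y0, grid, side="right") / len(y0)
        F1 = np.searchsorted(y1, grid, side="right") / len(y1)
        stat = max(stat, np.abs(F0 - F1).max())
    return stat

# Simulated design with an endogenous x and a binary instrument z.
n = 2000
z = rng.integers(0, 2, n)
u = rng.standard_normal(n)
x = 0.7 * z + u + rng.standard_normal(n)     # endogenous: corr(x, u) != 0
y = x + u                                    # structural relation

bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
obs = cond_ks(y, z, bins)

# Permutation null: shuffling z within X-bins enforces Y independent of Z
# given (binned) X, which holds under exogeneity.
B, exceed = 499, 0
for _ in range(B):
    zp = z.copy()
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        zp[idx] = rng.permutation(zp[idx])
    exceed += cond_ks(y, zp, bins) >= obs
print(f"permutation p-value: {(1 + exceed) / (B + 1):.3f}")  # small => endogenous
```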
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is the special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions, the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions. The first step estimates the correlated measurement error from a reverse regression; the second step estimates the usual coefficients of interest using adjusted instruments.
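For intuition, a small simulation (not the authors' estimator) showing why correlated measurement error matters: the OLS bias is no longer pure attenuation, its direction following the sign of corr(measurement error, model error), while an instrument independent of both error components, here a hypothetical second measurement, remains consistent. In panel practice, instruments are often lags of the mismeasured regressor, which can inherit the correlated error and thus motivate the paper's adjustment.

```python
import numpy as np

rng = np.random.default_rng(6)

# One mismeasured regressor: x_obs = x_true + m, where the measurement
# error m is allowed to correlate with the model error u.
n, beta, rho_mu = 20000, 1.0, 0.5
x_true = rng.standard_normal(n)
u = rng.standard_normal(n)
m = rho_mu * u + rng.standard_normal(n)   # correlated measurement error
x_obs = x_true + m
y = beta * x_true + u
w = x_true + rng.standard_normal(n)       # independent second measurement

ols = (x_obs @ y) / (x_obs @ x_obs)       # attenuated AND shifted by corr(m, u)
iv = (w @ y) / (w @ x_obs)                # consistent: w independent of m and u
print(f"OLS: {ols:.3f}   IV (second measurement): {iv:.3f}   true: {beta}")
```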