845 results for large-sample

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this paper is to analyse the issues related to home bias and foreign direct investments (FDIs). We study the role of physical, cultural, and institutional distances from home on FDI decisions taken by corporations to assess whether the globalization of the past two decades has reduced their influence. Using the ‘home bias’ framework from the finance literature and the gravity model from the economics literature, we utilize a large sample of both developed and emerging markets, covering FDI flows of 6263 unique bilateral country pairs over a 30-year period. We find strong empirical evidence of persistent home bias in FDI outflows, and we show that not only physical distance but also cultural and institutional similarities between host and source countries remain decisive factors in foreign corporate investment decisions. We also show that such home bias is persistent over time and is observed around the world.
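The gravity model referenced above relates bilateral flows to the economic mass of the two countries and the (inverse) distance between them. A minimal sketch of the log-linear form follows; the coefficients `b0`–`b3` are purely illustrative placeholders, not estimates from the paper:

```python
import math

def gravity_fdi(gdp_source, gdp_host, distance_km,
                b0=-10.0, b1=1.0, b2=1.0, b3=0.8):
    # Log-linear gravity equation:
    #   ln(FDI_ij) = b0 + b1*ln(GDP_i) + b2*ln(GDP_j) - b3*ln(dist_ij)
    # All coefficients here are illustrative, not the paper's estimates.
    log_flow = (b0 + b1 * math.log(gdp_source) + b2 * math.log(gdp_host)
                - b3 * math.log(distance_km))
    return math.exp(log_flow)

# Doubling the distance scales the predicted flow by 2**(-b3):
near = gravity_fdi(1e12, 5e11, 1000)
far = gravity_fdi(1e12, 5e11, 2000)
```

Here `near / far` equals 2**0.8 ≈ 1.74: the distance elasticity `b3` is the sense in which physical distance penalizes FDI in this framework.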

Abstract:

For two reasons, our capacity for systematic comparison of innovative participatory democratic processes remains limited. First, the category of participatory democratic innovations remains relatively vague when compared to more traditional democratic institutions and practices. Second, until recently there existed no large-sample databases that captured relevant variables in the practice of democratic innovation. The lone exception to these patterns is the Participedia database, located online. Participedia is well placed to respond to the two obstacles to systematic comparative research on democratic innovation. First, its crowdsourced data collection strategy means that many of the cases on the platform are not well known and have not been the subject of sustained academic analysis. Second, the data captured in the articles provides the basis for systematic comparative analysis of democratic innovations both within type (e.g., participatory budgeting, mini-publics) and across types. The platform allows for systematic content analysis of text descriptions and/or statistical analysis of the datasets generated from the structured data fields. This article describes the data about innovative participatory democratic processes available from Participedia, and furnishes examples of the kinds of quantitative and qualitative insights about those processes that Participedia enables.

Abstract:

Purpose: Surgery remains the treatment of choice for localized renal neoplasms. While radical nephrectomy was long considered the gold standard, partial nephrectomy has equivalent oncological results for small tumors. The role of negative surgical margins continues to be debated. Intraoperative frozen section analysis is expensive and time-consuming. We assessed the feasibility of intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy and its correlation with margin status on definitive pathological evaluation.

Materials and Methods: A study was done at 2 institutions from February 2008 to March 2011. Patients undergoing partial nephrectomy for T1-T2 renal tumors were included in analysis. Partial nephrectomy was done by a standardized minimal healthy tissue margin technique. After resection the specimen was kept in saline and tumor margin status was immediately determined by ex vivo ultrasound. Sequential images were obtained to evaluate the whole tumor pseudocapsule. Results were compared with margin status on definitive pathological evaluation.

Results: A total of 19 men and 14 women with a mean ± SD age of 62 ± 11 years were included in analysis. Intraoperative ex vivo ultrasound revealed negative surgical margins in 30 cases and positive margins in 2, while it could not be done in 1. Final pathological results revealed negative margins in all except 1 case. Ultrasound sensitivity and specificity were 100% and 97%, respectively. Median ultrasound duration was 1 minute. Mean tumor and margin size was 3.6 ± 2.2 cm and 1.5 ± 0.7 mm, respectively.

Conclusions: Intraoperative ex vivo ultrasound of resection margins in patients undergoing partial nephrectomy is feasible and efficient. Large sample studies are needed to confirm its promising accuracy in determining margin status.
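The reported sensitivity and specificity follow from a standard 2×2 comparison of ultrasound calls against final pathology. A small sketch using counts inferred from the abstract (the exact cross-tabulation is our assumption, since the abstract does not state it explicitly):

```python
def sens_spec(tp, fn, tn, fp):
    # Sensitivity = true positives / all pathology-positive margins;
    # specificity = true negatives / all pathology-negative margins.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Counts inferred from the abstract: the single pathology-positive
# margin was flagged by ultrasound (TP=1), one ultrasound-positive
# case was negative on final pathology (FP=1), and the remaining
# 30 evaluable cases were negative on both (TN=30, FN=0).
sens, spec = sens_spec(tp=1, fn=0, tn=30, fp=1)
```

With these counts, sensitivity is 1/1 = 100% and specificity is 30/31 ≈ 96.8%, which rounds to the reported 97%.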

Abstract:

The diagnosis of inflammatory bowel disease (IBD), comprising Crohn's disease (CD) and ulcerative colitis (UC), continues to present difficulties due to nonspecific symptoms and limited test accuracies. We aimed to determine the diagnostic delay (time from first symptoms to IBD diagnosis) and to identify associated risk factors. A total of 1591 IBD patients (932 CD, 625 UC, 34 indeterminate colitis [IC]) from the Swiss IBD cohort study (SIBDCS) were evaluated. The SIBDCS collects data on a large sample of IBD patients from hospitals and private practices across Switzerland through physician and patient questionnaires. The primary outcome measure was diagnostic delay. Diagnostic delay in CD patients was significantly longer than in UC patients (median 9 versus 4 months, P < 0.001). Seventy-five percent of CD patients were diagnosed within 24 months, compared to 12 months for UC and 6 months for IC patients. Multivariate logistic regression identified age <40 years at diagnosis (odds ratio [OR] 2.15, P = 0.010) and ileal disease (OR 1.69, P = 0.025) as independent risk factors for long diagnostic delay in CD (>24 months). In UC patients, nonsteroidal anti-inflammatory drug (NSAID) intake (OR 1.75, P = 0.093) and male gender (OR 0.59, P = 0.079) were associated with long diagnostic delay (>12 months). Whereas the median delay in diagnosing CD, UC, and IC seems acceptable, a considerable proportion of CD patients experience a long delay. More public awareness work needs to be done to reduce patient and doctor delays in this target population.
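For readers unfamiliar with odds ratios, the unadjusted version is simple arithmetic on a 2×2 table. The counts below are hypothetical, chosen only so the arithmetic lands on an OR of 2.15; the paper's ORs are adjusted, multivariate estimates, not computed this way:

```python
def odds_ratio(a, b, c, d):
    # Unadjusted odds ratio from a 2x2 table:
    #             long delay   no long delay
    #   age < 40      a              b
    #   age >= 40     c              d
    # OR = (a/b) / (c/d): how much larger the odds of long delay
    # are in the first row than in the second.
    return (a / b) / (c / d)

# Hypothetical counts, for illustration only:
orr = odds_ratio(60, 240, 30, 258)
```

With these counts the odds of long delay are 0.25 among the younger group versus 30/258 among the older group, giving an odds ratio of 2.15.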

Abstract:

BACKGROUND: The alcohol purchase task (APT), which presents a scenario and asks participants how many drinks they would purchase and consume at different prices, has been used among students and small clinical samples to obtain measures of alcohol demand, but not in large, general population samples. METHODS: We administered the APT to a large sample of young men from the general population (Cohort Study on Substance Use Risk Factors). Participants who reported drinking in the past year (n=4790) and reported on past 12-month alcohol use, DSM-5 alcohol use disorder (AUD) criteria, and alcohol-related consequences were included. RESULTS: Among the APT's demand parameters, intensity was 8.7 (SD=6.5), indicating that when drinks are free, participants report a planned consumption of almost 9 drinks. The maximum alcohol expenditure (Omax) was over 35 CHF (1 CHF = 1.1 USD) and demand became elastic (Pmax) at 8.4 CHF (SD=5.6). The mean price at which consumption was suppressed was 15.6 CHF (SD=5.4). The exponential equation provided a satisfactory fit to individual responses (mean R²: 0.8, median: 0.8). Demand intensity was correlated with alcohol use, number of AUD criteria, and number of consequences (all r≥0.3, p<0.0001). Omax was correlated with alcohol use (p<0.0001). The elasticity parameter was weakly correlated with alcohol use in the expected direction. CONCLUSION: The APT measures are useful in characterizing demand for alcohol among young men in the general population. Demand may provide a clinically useful index of the strength of motivation for alcohol use in general population samples.
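The exponential demand equation mentioned in the results can be sketched in the Hursh–Silberberg form, log10(Q) = log10(Q0) + k·(e^(−α·Q0·C) − 1). Only `q0` below is taken from the reported intensity of 8.7; `k` and `alpha` are arbitrary, so the derived Omax and Pmax do not reproduce the paper's estimates:

```python
import math

def demand(price, q0=8.7, k=2.0, alpha=0.01):
    # Hursh-Silberberg exponential demand:
    #   log10(Q) = log10(q0) + k * (exp(-alpha * q0 * price) - 1)
    # q0 is the reported intensity; k and alpha are arbitrary here.
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1))

prices = [i * 0.5 for i in range(61)]          # 0 to 30 CHF
expenditure = [p * demand(p) for p in prices]  # price x consumption
intensity = demand(0)                   # planned drinks when free
omax = max(expenditure)                 # maximum alcohol expenditure
pmax = prices[expenditure.index(omax)]  # price at peak expenditure
```

At price zero the curve returns the intensity parameter exactly; expenditure first rises with price (inelastic demand) and then falls past Pmax (elastic demand).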

Abstract:

The purpose of this study was to assess the cross-cultural validity of the Marlowe-Crowne Social Desirability scale short form C in a large sample of French-speaking participants from eight African countries and Switzerland. Exploratory and confirmatory analyses suggested retaining a two-factor structure. Item bias detection according to country was conducted for all 13 items and effect sizes were calculated with R². For the two-factor solution, 9 items were associated with a negligible effect size, 3 items with a moderate one, and 1 item with a large one. A series of analyses of covariance treating acquiescence as a covariate showed that the acquiescence tendency does not contribute to the bias at item level. This research indicates that the psychometric properties of this instrument do not reach scalar equivalence, but that a culturally reliable measurement of social desirability could be developed.

Abstract:

The ability to monitor and evaluate the consequences of ongoing behaviors and coordinate behavioral adjustments seems to rely on networks including the anterior cingulate cortex (ACC) and phasic changes in dopamine activity. Activity (and presumably functional maturation) of the ACC may be indirectly measured using the error-related negativity (ERN), an event-related potential (ERP) component that is hypothesized to reflect activity of the automatic response monitoring system. To date, no studies have examined the measurement reliability of the ERN as a trait-like measure of response monitoring, its development in mid- and late adolescence, or its relation to risk-taking and empathic ability, two traits linked to dopaminergic and ACC activity. Utilizing a large sample of 15- and 18-year-old males, the present study examined the test-retest reliability of the ERN, age-related changes in the ERN and other components of the ERP associated with error monitoring (the Pe and CRN), and the relations of the error-related ERP components to personality traits of risk propensity and empathy. Results indicated good test-retest reliability of the ERN, providing important validation of the ERN as a stable and possibly trait-like electrophysiological correlate of performance monitoring. Of the three components, only the ERN was of greater amplitude for the older adolescents, suggesting that its ACC network is functionally late to mature, due to either structural or neurochemical changes with age. Finally, the ERN was smaller for those with high risk propensity and low empathy, while other components associated with error monitoring were not, which suggests that poor ACC function may be associated with the desire to engage in risky behaviors and that the ERN may be influenced by the extent of individuals' concern with the outcome of events.
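Test-retest reliability of an ERP component is typically quantified as the correlation between amplitudes measured at two sessions. A minimal sketch with hypothetical amplitude values (the ERN is a negative deflection, hence the negative microvolt values):

```python
import math

def pearson_r(a, b):
    # Pearson correlation between session-1 and session-2 amplitudes;
    # a high r indicates trait-like stability of the component.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((x - mb) ** 2 for x in b)
    return cov / math.sqrt(va * vb)

# Hypothetical ERN amplitudes (microvolts) at time 1 and time 2:
t1 = [-8.0, -5.5, -10.2, -7.1, -6.4, -9.3]
t2 = [-7.5, -6.0, -9.8, -7.4, -6.1, -8.9]
r = pearson_r(t1, t2)
```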

Abstract:

The purposes of this study were: a) to examine the prevalence and consequences associated with adolescent gambling, b) to examine the factors which influence adolescent gambling, c) to determine what factors discriminate among four groups of gamblers (no-risk/non-gamblers, low-risk gamblers, at-risk gamblers, and high-risk/problematic gamblers), and d) to examine the relation of gambling to nine other risk behaviours (i.e., alcohol use, smoking, marijuana use, hard drug use, sexual activity, minor delinquency, major delinquency, direct aggression, and indirect aggression). Adolescents (N = 3,767) from 25 secondary schools completed a two-hour survey that assessed involvement in risk behaviours as well as potential predictors from a wide range of contexts (school, neighbourhood, family, peer, and intrapersonal). The majority of adolescents reported gambling, although the frequency of gambling participation was low. The strongest predictors/discriminators of gambling involvement were gender, unstructured activities, structured activities, and risk attitudes/perceptions. In addition, examination of the co-occurrence of gambling with other risk behaviours revealed that for high-risk/problem gamblers, the three most frequent co-occurring high-risk behaviours were direct aggression, minor delinquency, and alcohol use. This study was the first to examine the continuum of gambling involvement (i.e., non-gambling to high-risk/problematic gambling) using a comprehensive set of potential predictors with a large sample of secondary school students. The findings of this study support past research and theories (e.g., the Theory of Triadic Influence) which suggest the importance of proximal variables in predicting risk behaviours.
The next step, however, will be to examine the direct and indirect effects of the ultimate (e.g., temperament), distal (e.g., parental relationship), and proximal variables (e.g., risk attitudes/perceptions) on gambling involvement in a longitudinal study.

Abstract:

Nonsuicidal self-injury (NSSI), which refers to the direct and deliberate destruction of bodily tissue in the absence of suicidal intent, is a serious and widespread mental health concern. Although NSSI has been differentiated from suicidal behavior on the basis of non-lethal intent, research has shown that these two behaviors commonly co-occur. Despite increased research on the link between NSSI and suicidal behavior, however, little attention has been given to why these two behaviors are associated. My doctoral dissertation specifically addressed this gap in the literature by examining the link between NSSI and several measures of suicidal risk (e.g., suicidal ideation, suicidal attempts, pain tolerance) among a large sample of young adults. The primary goal of my doctoral research was to identify individuals engaging in NSSI who were at risk for suicidal ideation and attempts, in an effort to elucidate the processes through which psychosocial risk, NSSI, and suicidal risk may be associated. Participants were drawn from a larger sample of 1153 undergraduate students (70.3% female) at a mid-sized Canadian university. In study one, I examined whether increases in psychosocial risk and suicidal ideation were associated with changes in NSSI engagement over a one-year period. Analyses revealed that beginners, relapsed injurers, and persistent injurers were differentiated from recovered injurers and desisters by increases in psychosocial risk and suicidal ideation over time. In study two, I examined whether several NSSI characteristics (e.g., frequency, number of methods) were associated with suicidal risk using latent class analysis. Three subgroups of individuals were identified: 1) an infrequent NSSI/not high risk for suicidal behavior group, 2) a frequent NSSI/not high risk for suicidal behavior group, and 3) a frequent NSSI/high risk for suicidal behavior group.
Follow-up analyses indicated that individuals in the frequent NSSI/high risk for suicidal behavior group met the clinical cutoff score for high suicidal risk and reported significantly greater levels of suicidal ideation, attempts, and risk for future suicidal behavior as compared to the other two classes. Class 3 was also differentiated by higher levels of psychosocial risk (e.g., depressive symptoms, social anxiety) relative to the other two classes, as well as a comparison group of non-injuring young adults. Finally, in study three, I examined whether NSSI was associated with pain tolerance in a lab-based task, as tolerance to pain has been shown to be a strong predictor of suicidal risk. Individuals who engaged in NSSI to regulate the need to self-punish tolerated pain longer than individuals who engaged in NSSI but not to self-punish, as well as a non-injuring comparison group. My findings offer new insight into the associations among psychosocial risk, NSSI, and suicidal risk, and can serve to inform intervention efforts aimed at individuals at high risk for suicidal behavior. More specifically, my findings provide clinicians with several NSSI-specific risk factors (e.g., frequent self-injury, self-injuring alone, self-injuring to self-punish) that may serve as important markers of suicidal risk among individuals engaging in NSSI.

Abstract:

This paper analyzes the dynamics of wages and workers' mobility within firms with a hierarchical structure of job levels. The theoretical model proposed by Gibbons and Waldman (1999), which combines the notions of human capital accumulation, job rank assignments based on comparative advantage, and learning about workers' abilities, is implemented empirically to measure the importance of these elements in explaining the wage policy of firms. Survey data from the GSOEP (German Socio-Economic Panel) are used to draw conclusions on the common features characterizing the wage policies of a large sample of firms. The GSOEP survey also provides information on a worker's rank within his firm, which is usually not available in other surveys. The results are consistent with non-random selection of workers onto the rungs of a job ladder. There is no direct evidence of learning about workers' unobserved abilities, but the analysis reveals that unmeasured ability is an important factor driving wage dynamics. Finally, job rank effects remain significant even after controlling for measured and unmeasured characteristics.
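The comparative-advantage assignment rule at the core of the Gibbons and Waldman (1999) framework can be sketched as follows: each job level j produces output d_j + c_j·η for a worker of effective ability η, and the firm assigns the worker to the level with the highest expected output. The ladder parameters below are hypothetical, not estimated from the GSOEP:

```python
def assign_job(eta, jobs):
    # Each job level j produces output d_j + c_j * eta for a worker
    # with effective ability eta; the firm assigns the worker to the
    # level with the highest expected output. Higher rungs have lower
    # intercepts but steeper ability slopes (comparative advantage).
    return max(range(len(jobs)), key=lambda j: jobs[j][0] + jobs[j][1] * eta)

ladder = [(5.0, 1.0), (3.0, 2.0), (0.0, 3.0)]  # hypothetical (d_j, c_j)
low = assign_job(1.0, ladder)    # low ability -> bottom rung
mid = assign_job(2.5, ladder)    # medium ability -> middle rung
high = assign_job(4.0, ladder)   # high ability -> top rung
```

As ability rises (through human capital accumulation or through the market learning about it), the worker crosses the output cutoffs and is promoted up the ladder.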

Abstract:

This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which it converges. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate in even quite large sample sizes.
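The exercise described above can be sketched as a small simulation: generate ARMA(1,1) data, scale the partial sum by √(n·LRV), and check that the replications behave like W(1) ~ N(0, 1). The parameter values are illustrative, not those used in the paper:

```python
import math
import random

def arma11(n, phi=0.5, theta=0.3, seed=0):
    # y_t = phi * y_{t-1} + e_t + theta * e_{t-1}, with e_t ~ N(0, 1)
    rng = random.Random(seed)
    y, e_prev, out = 0.0, 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        y = phi * y + e + theta * e_prev
        e_prev = e
        out.append(y)
    return out

def scaled_partial_sum(data, phi=0.5, theta=0.3):
    # FCLT: S_n / sqrt(n * LRV) converges to W(1) ~ N(0, 1), where the
    # long-run variance of the ARMA(1,1) is ((1 + theta)/(1 - phi))**2.
    n = len(data)
    lrv = ((1 + theta) / (1 - phi)) ** 2
    return sum(data) / math.sqrt(n * lrv)

# 200 independent replications of the scaled sum at n = 500
draws = [scaled_partial_sum(arma11(500, seed=s)) for s in range(200)]
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
```

The note's point is precisely that for some (phi, theta) combinations the sample moments of such draws can remain far from the limiting N(0, 1) values even at sample sizes that look comfortably large.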

Abstract:

Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with m.d.s. errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
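A minimal sketch of one of the three proposals, the recursive-design wild bootstrap, for an AR(1) without intercept (a simplification of the general autoregressive setup analyzed in the paper):

```python
import random

def fit_ar1(y):
    # OLS slope for y_t = rho * y_{t-1} + e_t (no intercept)
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def recursive_wild_bootstrap(y, reps=499, seed=0):
    # Wild bootstrap: e_t* = e_t * eta_t with eta_t ~ N(0, 1), which
    # preserves conditional heteroskedasticity in the residuals,
    # unlike i.i.d. residual resampling; the bootstrap series is then
    # rebuilt recursively from the estimated coefficient.
    rng = random.Random(seed)
    rho = fit_ar1(y)
    resid = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    boot = []
    for _ in range(reps):
        ystar = [y[0]]
        for e in resid:
            ystar.append(rho * ystar[-1] + e * rng.gauss(0.0, 1.0))
        boot.append(fit_ar1(ystar))
    return rho, boot

# simulated AR(1) data with rho = 0.5 (homoskedastic, for simplicity)
rng = random.Random(1)
y = [0.0]
for _ in range(200):
    y.append(0.5 * y[-1] + rng.gauss(0.0, 1.0))
rho_hat, boot = recursive_wild_bootstrap(y, reps=199, seed=2)
```

The empirical quantiles of `boot` then provide critical values that remain valid when the residual variance depends on the past, which is where the conventional i.i.d. residual bootstrap breaks down.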

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived from them. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
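The Monte Carlo test technique underlying these procedures delivers an exact p-value from a finite number of statistics simulated under the null. A minimal sketch of the p-value computation (the example inputs are illustrative):

```python
def mc_pvalue(stat_obs, stats_sim):
    # Exact Monte Carlo p-value: with N statistics simulated under
    # the null hypothesis, p = (1 + #{S_i >= S_obs}) / (N + 1).
    # The resulting test has exact level alpha whenever
    # alpha * (N + 1) is an integer (e.g. N = 99 for alpha = 0.05).
    n_geq = sum(1 for s in stats_sim if s >= stat_obs)
    return (1 + n_geq) / (len(stats_sim) + 1)

# 99 simulated statistics, all smaller than the observed value 2.5:
p = mc_pvalue(2.5, [i * 0.025 for i in range(99)])  # p = 1/100
```

Because the p-value's validity rests only on the simulated statistics being drawn from the null distribution, no large-sample approximation is needed, which is what makes exact inference with non-Gaussian stable errors feasible.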

Abstract:

Thesis by articles. The four articles are appended to the thesis as supplementary files.

Abstract:

The last decade has seen growing interest in the econometric literature in the problems posed by weak instrumental variables, that is, situations where the instrumental variables are only weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio, and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and consumption-based asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], in which the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy to this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of the instrumental variables poses major difficulties for statistical inference, can we propose test procedures that select instruments that are both strong and valid?
Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are generally consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail, but the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Next, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterize the situations where the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments).
The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis offers several new insights as well as extensions of earlier procedures. Indeed, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the tests' power that clearly exhibits the factors determining power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. The conclusion of Guggenberger (2008) concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when instruments are weak and endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pretest estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should be applied only when one is confident of having strong instruments.
Hence, the conclusions of Guggenberger (2008) are qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known returns-to-education problem. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even with non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test addresses a common problem in empirical work: testing the partial exogeneity of a subset of variables. We propose two new pretest estimators based on the Wald test that outperform (in terms of mean squared error) the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrumental-variable selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that a mother's education helps explain her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the original or extended Wald test allows the construction of confidence regions and tests of linear restrictions on covariances, it assumes that the model's parameters are identified.
When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for constructing confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation in the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when instruments are weak. This allows us to propose a valid instrumental-variable selection procedure even in the presence of an identification problem. The instrument-selection procedure is based on two new pretest estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when instruments are weak and endogeneity is moderate; (2) the pretest estimators overall perform very well compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model.
In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly so in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
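The weak-instrument problem driving the thesis can be illustrated with a small simulation: the same just-identified IV estimator that is well behaved under a strong instrument becomes wildly dispersed when the first-stage coefficient is close to zero. All parameter values below are illustrative:

```python
import random

def iv_slope(z, x, y):
    # Just-identified IV estimator: beta_hat = cov(z, y) / cov(z, x)
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    szy = sum((z[i] - mz) * (y[i] - my) for i in range(n))
    szx = sum((z[i] - mz) * (x[i] - mx) for i in range(n))
    return szy / szx

def simulate(pi, n=500, seed=0):
    # First stage x = pi * z + v, structural y = 1.0 * x + u, with
    # corr(u, v) > 0, so x is endogenous; pi controls instrument
    # strength (small pi = weak instrument).
    rng = random.Random(seed)
    z, x, y = [], [], []
    for _ in range(n):
        zi = rng.gauss(0.0, 1.0)
        ui = rng.gauss(0.0, 1.0)
        vi = 0.8 * ui + 0.6 * rng.gauss(0.0, 1.0)
        z.append(zi)
        x.append(pi * zi + vi)
        y.append(1.0 * x[-1] + ui)
    return z, x, y

strong = [iv_slope(*simulate(1.0, seed=s)) for s in range(100)]
weak = [iv_slope(*simulate(0.05, seed=s)) for s in range(100)]
spread_strong = max(strong) - min(strong)
spread_weak = max(weak) - min(weak)  # far larger under weak IDs
```

Under the strong instrument the estimates concentrate around the true coefficient of 1.0; under the weak one their dispersion explodes, which is why Wald-type inference fails and identification-robust procedures such as the Anderson-Rubin test are needed.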