906 results for Classical measurement error model
Abstract:
Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgement. There is therefore a major need for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools for predicting exposure levels. In a first step, a survey was conducted among occupational hygienists in Switzerland to define their expectations of modelling tools (types of results, models, and potential observable parameters). It was found that exposure models are rarely used in practice in Switzerland and that exposures are mainly estimated from the expert's past experience. Moreover, chemical emissions and their dispersion near the source were considered key parameters. Experimental and modelling studies were also performed in specific cases to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to carbon monoxide in different situations, and the results were compared with exposure levels reported in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study of a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the emission rate of waterproofing overspray. A classical two-zone model was then used to assess aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points in an exposure chamber using direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, but that attribution to a given compartment could not be made on the basis of simple geometric considerations. In a further step, the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained were used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of compartment, thus largely improving the reliability of the predictions. All these investigations helped improve our understanding of modelling tools and identify their limitations. The integration of more accessible determinants, in line with experts' needs, should encourage the use of such tools in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve the conditions in which expert judgements take place, and therefore the protection of workers' health.
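The two-zone model mentioned in this abstract is commonly written as a pair of mass-balance equations for a near-field and a far-field compartment. A minimal sketch follows, assuming illustrative values for the emission rate, room ventilation, and interzonal airflow; none of these values are reported in the abstract.

```python
# Classical two-zone (near-field / far-field) model: steady state plus a
# transient solution via simple Euler integration.
# All parameter values below are illustrative assumptions, not from the thesis.

G = 5.0        # emission rate, mg/min
Q = 2.0        # room ventilation rate, m^3/min
beta = 1.5     # interzonal airflow between near and far field, m^3/min
V_nf = 1.0     # near-field volume, m^3
V_ff = 49.0    # far-field volume, m^3

# Steady-state concentrations (mg/m^3)
c_ff_ss = G / Q
c_nf_ss = G / Q + G / beta
print(f"steady state: near field {c_nf_ss:.2f}, far field {c_ff_ss:.2f} mg/m3")

# Transient build-up from zero initial concentration
dt, t_end = 0.01, 60.0
c_nf, c_ff = 0.0, 0.0
for _ in range(int(t_end / dt)):
    dc_nf = (G + beta * (c_ff - c_nf)) / V_nf
    dc_ff = (beta * (c_nf - c_ff) - Q * c_ff) / V_ff
    c_nf += dc_nf * dt
    c_ff += dc_ff * dt
print(f"after {t_end:.0f} min: near field {c_nf:.2f}, far field {c_ff:.2f} mg/m3")
```

With these numbers the far-field steady state is G/Q and the near field sits G/beta above it, which is why the worker close to the source sees the higher concentration.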
Abstract:
Introduction: Ethylglucuronide (EtG) is a direct and specific metabolite of ethanol. Its determination in hair is of increasing interest for detecting and monitoring alcohol abuse. The quantification of EtG in hair requires analytical methods with the highest sensitivity and specificity. We present a fully validated method based on gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS). The method was validated using the French Society of Pharmaceutical Sciences and Techniques (SFSTP) guidelines, which are based on the determination of total measurement error and accuracy profiles. Methods: Washed and powdered hair is extracted in water using ultrasonic incubation. After purification by Oasis MAX solid phase extraction, the derivatized EtG is detected and quantified by GC-NCI-MS/MS in the selected reaction monitoring mode. The transitions m/z 347 → 163 and m/z 347 → 119 were used for the quantification and identification of EtG. Four quality control (QC) samples prepared from hair taken post mortem from two subjects with a known history of alcoholism were used. A proficiency test with seven participating laboratories was first run to validate the EtG concentration of each QC sample. Based on the results of this test, the samples were then used as internal controls for validation of the method. Results: The mean EtG concentrations measured in the four QC samples were 259.4, 130.4, 40.8, and 8.4 pg/mg hair. Method validation showed linearity between 8.4 and 259.4 pg/mg hair (r² > 0.999). The lower limit of quantification was set at 8.4 pg/mg. Repeatability and intermediate precision were below 13.2% for all concentrations tested. Conclusion: The method proved suitable for routine analysis of EtG in hair. The GC-NCI-MS/MS method was then successfully applied to the analysis of EtG in hair samples collected from different alcohol consumers.
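The SFSTP-style validation cited here rests on repeatability and intermediate precision estimated from QC series run on several days. A minimal sketch of that variance decomposition via one-way ANOVA; the EtG readings below are made up, since the abstract reports only the summary CVs.

```python
import numpy as np

# Repeatability and intermediate precision from a QC series measured on
# several days (one-way ANOVA decomposition), as used in SFSTP-style
# total-error validation. The concentrations below are invented.
qc = np.array([  # rows: days, columns: replicates (pg/mg)
    [131.2, 128.7, 130.9],
    [129.5, 132.1, 127.8],
    [133.0, 130.2, 131.5],
])
p, n = qc.shape                        # days, replicates per day
grand_mean = qc.mean()
ms_between = n * ((qc.mean(axis=1) - grand_mean) ** 2).sum() / (p - 1)
ms_within = ((qc - qc.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))

var_repeat = ms_within                        # repeatability variance
var_between = max((ms_between - ms_within) / n, 0.0)
var_interm = var_repeat + var_between         # intermediate precision variance
cv_repeat = 100 * np.sqrt(var_repeat) / grand_mean
cv_interm = 100 * np.sqrt(var_interm) / grand_mean
print(f"repeatability CV = {cv_repeat:.1f}%, intermediate precision CV = {cv_interm:.1f}%")
```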
Abstract:
BACKGROUND Recently, some US cohorts have shown a moderate association between red and processed meat consumption and mortality supporting the results of previous studies among vegetarians. The aim of this study was to examine the association of red meat, processed meat, and poultry consumption with the risk of early death in the European Prospective Investigation into Cancer and Nutrition (EPIC). METHODS Included in the analysis were 448,568 men and women without prevalent cancer, stroke, or myocardial infarction, and with complete information on diet, smoking, physical activity and body mass index, who were between 35 and 69 years old at baseline. Cox proportional hazards regression was used to examine the association of meat consumption with all-cause and cause-specific mortality. RESULTS As of June 2009, 26,344 deaths were observed. After multivariate adjustment, a high consumption of red meat was related to higher all-cause mortality (hazard ratio (HR) = 1.14, 95% confidence interval (CI) 1.01 to 1.28, 160+ versus 10 to 19.9 g/day), and the association was stronger for processed meat (HR = 1.44, 95% CI 1.24 to 1.66, 160+ versus 10 to 19.9 g/day). After correction for measurement error, higher all-cause mortality remained significant only for processed meat (HR = 1.18, 95% CI 1.11 to 1.25, per 50 g/d). We estimated that 3.3% (95% CI 1.5% to 5.0%) of deaths could be prevented if all participants had a processed meat consumption of less than 20 g/day. Significant associations with processed meat intake were observed for cardiovascular diseases, cancer, and 'other causes of death'. The consumption of poultry was not related to all-cause mortality. CONCLUSIONS The results of our analysis support a moderate positive association between processed meat consumption and mortality, in particular due to cardiovascular diseases, but also to cancer.
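The abstract does not state which correction method EPIC used, but a standard approach under the classical measurement error model is regression calibration: the observed log hazard ratio is divided by the reliability ratio of the exposure measurement. A stylized sketch with invented variances and an invented uncorrected estimate:

```python
import numpy as np

# Regression calibration under the classical measurement error model
# X_obs = X_true + U: the observed log hazard ratio is attenuated by the
# reliability ratio lambda = var(X_true) / var(X_obs). All values here are
# illustrative; the abstract does not report which correction EPIC used.
rng = np.random.default_rng(0)
n = 100_000
x_true = rng.normal(50.0, 20.0, n)         # "true" long-run intake, g/day
x_obs = x_true + rng.normal(0.0, 15.0, n)  # single dietary measurement

lam = np.var(x_true) / np.var(x_obs)       # reliability ratio, ~0.64 here
log_hr_obs = np.log(1.12)                  # attenuated estimate per 50 g/day (made up)
log_hr_corr = log_hr_obs / lam             # de-attenuated estimate
print(f"reliability ratio = {lam:.2f}")
print(f"HR per 50 g/day: observed {np.exp(log_hr_obs):.2f}, corrected {np.exp(log_hr_corr):.2f}")
```

The direction of the adjustment depends on the error structure: classical error attenuates, so correction moves the hazard ratio away from 1.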
Abstract:
We aim to shed light on the development of personal mobility by analysing the repeated cross-sectional data of the four National Travel Surveys (NTS) conducted in Germany since the mid-seventies. The driving forces mentioned above operate on different levels of the system that generates the spatial behaviour we observe: travel demand derives from the needs and desires of individuals to participate in spatially separated activities. Individuals organise their lives in an interactive process within the context they live in, using the given infrastructure. Essential determinants of their demand are the individual's socio-demographic characteristics, but the opportunities and constraints defined by the household and the environment are also relevant for the behaviour that can ultimately be realised. To fully capture the context that determines individual behaviour, the (nested) hierarchy of persons within households within spatial settings has to be considered. The data we use for our analysis contain information on all three levels. By analysing these micro-data we attempt to improve our understanding of the macro-level developments summarised above. In addition, we investigate the predictive power of a few classic socio-demographic variables for individuals' daily travel distance in the four NTS data sets, with a focus on how this predictive power has evolved. The task of correctly measuring distances travelled with the NTS is complicated by the fact that, although the surveys measure the same variables, they used different sampling designs and data collection procedures. A further aim of the analysis is therefore to identify variables whose control corrects for this known measurement error, as a prerequisite for applying appropriate models to better understand the development of individual travel behaviour in a multilevel context. This task is complicated by the fact that variables describing survey procedures and outcomes are only provided with the 2002 data set (see Infas and DIW Berlin, 2003).
Abstract:
Blood pressure (BP) is a heritable, quantitative trait with intraindividual variability and susceptibility to measurement error. Genetic studies of BP generally use single-visit measurements and thus cannot remove variability occurring over months or years. We leveraged the idea that averaging BP measured across time would improve phenotypic accuracy and thereby increase statistical power to detect genetic associations. We studied long-term average (LTA) systolic BP (SBP), diastolic BP (DBP), mean arterial pressure (MAP), and pulse pressure (PP), each averaged over multiple years, in 46,629 individuals of European ancestry. We identified 39 trait-variant associations across 19 independent loci (p < 5 × 10⁻⁸); five associations (in four loci) uniquely identified by our LTA analyses included those of SBP and MAP at 2p23 (rs1275988, near KCNK3), DBP at 2q11.2 (rs7599598, in FER1L5), and PP at 6p21 (rs10948071, near CRIP3) and 7p13 (rs2949837, near IGFBP3). Replication analyses conducted in cohorts with single-visit BP data showed nominal support for these associations (p < 0.05). We estimated a 20% gain in statistical power with LTA as compared to single-visit BP association studies. Using LTA analysis, we identified genetic loci influencing BP. LTA might be one way of increasing the power of genetic associations for continuous traits in extant samples for other phenotypes that are measured serially over time.
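The power gain from long-term averaging follows directly from the classical measurement error model: averaging K visits divides the visit-level noise variance by K, so a genetic effect explains a larger share of the phenotypic variance. A small sketch with illustrative variance components (not the study's estimates):

```python
# Why long-term averaging (LTA) boosts power: averaging K visits shrinks
# the visit-to-visit noise variance by 1/K, so a fixed genetic effect
# explains a larger fraction of the phenotype. Variance components are
# illustrative assumptions.
var_genetic = 1.0      # variance explained by a variant (arbitrary units)
var_stable = 49.0      # other stable between-person variance
var_visit = 25.0       # visit-to-visit variability plus measurement error

for k in (1, 2, 4, 8):                      # number of visits averaged
    var_pheno = var_genetic + var_stable + var_visit / k
    r2 = var_genetic / var_pheno            # fraction of variance explained
    # the association test's noncentrality grows roughly in proportion to r2
    print(f"K={k}: variant explains {100 * r2:.2f}% of phenotype variance")
```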
Abstract:
Moisture sensitivity of Hot Mix Asphalt (HMA) mixtures, generally called stripping, is a major form of distress in asphalt concrete pavement. It is characterized by the loss of the adhesive bond between the asphalt binder and the aggregate (a failure of the bonding of the binder to the aggregate) or by a softening of the cohesive bonds within the asphalt binder (a failure within the binder itself), both due to the action of traffic loading in the presence of moisture. Evaluation of HMA moisture sensitivity has traditionally fallen into two categories: visual inspection tests and mechanical tests. However, most of these were developed for pre-Superpave mix designs. This research was undertaken to develop a protocol for evaluating the moisture sensitivity potential of HMA mixtures using the Nottingham Asphalt Tester (NAT). The mechanisms of HMA moisture sensitivity were reviewed and test protocols using the NAT were developed. Different types of blends, grouped as moisture-sensitive and non-moisture-sensitive, were used to evaluate the potential of the proposed test. The test results were analyzed with three performance-based parameters: the retained flow number based on critical permanent deformation failure (RFNP), the retained flow number based on cohesion failure (RFNC), and the energy ratio (ER). Analysis based on the energy ratio of elastic strain (EREE) at the flow number of cohesion failure (FNC) showed higher potential for evaluating HMA moisture sensitivity than the other parameters. If the measurement error in the data-acquisition process were removed, analyses based on RFNP and RFNC would also have high potential for evaluating HMA moisture sensitivity. The vacuum pressure saturation used in AASHTO T 283 and in the proposed test risks damaging the specimen before load application.
Abstract:
Returns to scale to capital and the strength of capital externalities play a key role in the empirical predictions and policy implications of different growth theories. We show that both can be identified with individual wage data, and we implement our approach at the city level using US Census data on individuals in 173 cities for 1970, 1980, and 1990. Estimation takes into account fixed effects, endogeneity of capital accumulation, and measurement error. We find no evidence of human or physical capital externalities, and aggregate returns to capital are decreasing. Returns to scale to physical and human capital are around 80 percent. We also find strong complementarities between human capital and labor, and substantial total employment externalities.
Abstract:
Does financial development result in capital being reallocated more rapidly to industries where it is most productive? We argue that, if this were the case, financially developed countries should see faster growth in industries with investment opportunities arising from global demand and productivity shifts. Testing this cross-industry, cross-country growth implication requires proxies for (latent) global industry investment opportunities. We show that tests relying only on data from specific (benchmark) countries may yield spurious evidence for or against the hypothesis. We therefore develop an alternative approach that combines benchmark-country proxies with a proxy that does not reflect opportunities specific to a country or level of financial development. Our empirical results yield clear support for the capital reallocation hypothesis.
Abstract:
This paper demonstrates that, contrary to conventional wisdom, measurement error biases in panel data estimation of convergence using OLS with fixed effects are huge, not trivial. It does so by way of the "skipping estimation": taking data from every m years of the sample (where m is an integer greater than or equal to 2), as opposed to every single year. It is shown that the estimated speed of convergence from OLS with fixed effects is biased upwards by as much as 7 to 15%.
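A toy version of the skipping idea, assuming an AR(1) panel with classical measurement error and a simple within (fixed-effects) estimator; all parameter values are invented and this is not the paper's exact design:

```python
import numpy as np

# "Skipping estimation" sketch: compare fixed-effects estimates of the AR(1)
# convergence parameter using every year versus every m-th year, when the
# observed series carries classical measurement error.
rng = np.random.default_rng(1)
N, T, rho = 200, 30, 0.95
alpha = rng.normal(0, 1, N)
y = np.zeros((N, T))
y[:, 0] = alpha / (1 - rho)                 # start at the stationary mean
for t in range(1, T):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(0, 0.1, N)
y_obs = y + rng.normal(0, 0.2, (N, T))      # classical measurement error

def fe_ar1(data, m):
    """Within-estimator of the m-period autoregressive coefficient."""
    s = data[:, ::m]                        # keep every m-th observation
    x, z = s[:, :-1], s[:, 1:]
    x = x - x.mean(axis=1, keepdims=True)   # within (fixed-effects) transform
    z = z - z.mean(axis=1, keepdims=True)
    return (x * z).sum() / (x * x).sum()

for m in (1, 2, 5):
    est = fe_ar1(y_obs, m)
    # implied annual convergence speed: -log(rho_m) / m; true value ~0.051
    print(f"m={m}: rho_m = {est:.3f}, implied speed = {-np.log(max(est, 1e-6)) / m:.3f}")
```

The attenuation from measurement error inflates the implied convergence speed most at m = 1 and fades as the skip length grows, which is the intuition behind the estimator.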
Abstract:
This paper presents a classical Cournot oligopoly model with some peculiar features: it is non-quasi-competitive, as the price under N-poly is greater than the monopoly price; a Cournot equilibrium exists and is unique after each new entry; and the successive equilibria after new entries are stable under an adjustment mechanism in which each seller's actual output is adjusted in proportion to the difference between its actual output and its profit-maximizing output. Moreover, the model tends to perfect competition as N goes to infinity, reaching the monopoly price again.
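The adjustment mechanism described here can be illustrated with a standard linear Cournot setup. Note that this textbook parameterization is quasi-competitive (price falls with N), so it illustrates only the adjustment dynamics and the competitive limit, not the paper's peculiar price ranking; demand and cost parameters are assumptions of the sketch.

```python
import numpy as np

# Cournot adjustment dynamics: each seller moves its output a fraction
# `speed` of the way toward its current profit-maximizing (best-response)
# output. Linear inverse demand P = a - b*Q and constant marginal cost c
# are assumptions of this sketch, not the paper's model.
a, b, c = 100.0, 1.0, 10.0

def best_response(q_others):
    """Profit-maximizing output given the rivals' total output."""
    return max((a - c - b * q_others) / (2 * b), 0.0)

for n_firms in (1, 2, 5, 20):
    speed = 1.0 / n_firms          # damped so simultaneous adjustment stays stable
    q = np.full(n_firms, 1.0)      # arbitrary initial outputs
    for _ in range(2000):          # iterate the adjustment mechanism
        br = np.array([best_response(q.sum() - q[i]) for i in range(n_firms)])
        q += speed * (br - q)
    price = a - b * q.sum()
    print(f"N={n_firms:2d}: q_i = {q[0]:6.2f}, price = {price:6.2f}")
```

In this linear setup the dynamics settle at the Cournot equilibrium q_i = (a - c) / (b(N + 1)), and the price falls toward marginal cost as N grows.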
Abstract:
Kirton's Adaption-Innovation Inventory (KAI) is a widely used measure of "cognitive style." Surprisingly, there is very little research investigating the discriminant and incremental validity of the KAI. In two studies (n = 213), we examined whether (a) KAI scores could be predicted by the "big five" personality dimensions and (b) KAI scores predicted leadership behavior when controlling for personality and ability. Correcting for measurement error, we found that KAI scores were predicted mostly by personality and gender (multiple R = 0.82). KAI scores did not predict variance in leadership when controlling for established predictors. Our findings add to recent literature questioning the uniqueness and utility of cognitive style and similar "style" constructs; researchers using such measures must control for the big five factors and correct for measurement error to avoid confounded interpretations.
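Correcting a correlation for measurement error in this context typically means Spearman's disattenuation: divide the observed correlation by the square root of the product of the two scales' reliabilities. A sketch with invented reliabilities, since the abstract reports only the corrected multiple R:

```python
import math

# Spearman disattenuation: correct an observed correlation for measurement
# error using scale reliabilities (e.g., Cronbach's alpha). All numbers are
# illustrative, not the study's actual estimates.
r_observed = 0.55      # observed correlation between KAI and a predictor
rel_kai = 0.86         # assumed reliability of the KAI
rel_pred = 0.80        # assumed reliability of the predictor

r_corrected = r_observed / math.sqrt(rel_kai * rel_pred)
print(f"disattenuated correlation = {r_corrected:.2f}")   # ~0.66 here
```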
Abstract:
Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and, more generally, failing to include important theoretical controls), extreme-scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
Abstract:
The OLS estimator of the intergenerational earnings correlation is biased towards zero, while the instrumental variables (IV) estimator is biased upwards. The first result arises because of measurement error, while the second rests on the presumption that parental education is an invalid instrument. We propose a panel data framework for quantifying the asymptotic biases of these estimators, as well as a mis-specification test for the IV estimator.
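Both biases can be reproduced in a few lines: under the classical measurement error model, OLS on a single-year earnings measure is attenuated toward zero, while an instrument with a direct effect on the outcome (one way an instrument can be invalid) pushes IV upward. This is a stylized simulation, not the authors' panel-data framework; all parameter values are invented.

```python
import numpy as np

# Stylized illustration of the two biases: OLS on noisy parental earnings is
# attenuated toward zero; an instrument (parental education) that also affects
# the child's earnings directly biases IV upward. Toy setup, invented values.
rng = np.random.default_rng(2)
n, beta = 200_000, 0.4                           # true intergenerational elasticity

x = rng.normal(0, 1, n)                          # true long-run parental earnings
educ = 0.6 * x + rng.normal(0, 1, n)             # parental education (instrument)
y = beta * x + 0.1 * educ + rng.normal(0, 1, n)  # direct effect makes the IV invalid
x_obs = x + rng.normal(0, 0.8, n)                # single-year earnings, classical error

b_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
b_iv = np.cov(educ, y)[0, 1] / np.cov(educ, x_obs)[0, 1]
print(f"true {beta:.2f}, OLS {b_ols:.3f} (attenuated), IV {b_iv:.3f} (upward)")
```

The true elasticity is bracketed by the two estimates, which is exactly the pattern the panel framework in the abstract is designed to quantify.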