86 results for "Classical measurement error model"


Relevance: 100.00%

Abstract:

Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that yield an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once built, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. The methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. Moreover, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model, assessing the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of predicting the exact response for any newly generated realization suggests that the methodology can be used effectively beyond uncertainty quantification, in particular for Bayesian inference and optimization.
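To make the construction concrete, here is a minimal sketch of the general idea on synthetic curves, with ordinary PCA on discretized responses standing in for FPCA and a linear map from proxy scores to exact scores; the data, dimensions, and regression choice are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical learning set: n realizations, each response a curve
# sampled at T points (rows = realizations).
rng = np.random.default_rng(0)
n, T = 100, 50
t = np.linspace(0, 1, T)
exact = np.array([np.sin(2 * np.pi * (t + rng.normal(0, 0.05))) * rng.uniform(0.8, 1.2)
                  for _ in range(n)])
proxy = 0.9 * exact + 0.1 * t + rng.normal(0, 0.02, (n, T))  # cheaper, biased solver

# Dimensionality reduction on both sets of curves (PCA on discretized
# curves plays the role of FPCA here).
pca_p, pca_e = PCA(n_components=5), PCA(n_components=5)
Zp = pca_p.fit_transform(proxy)   # proxy scores
Ze = pca_e.fit_transform(exact)   # exact scores

# Error model: learn the map from proxy scores to exact scores.
error_model = LinearRegression().fit(Zp, Ze)

# Predict the exact response of a new realization from its proxy run only.
new_proxy = proxy[:1]
Ze_hat = error_model.predict(pca_p.transform(new_proxy))
exact_hat = pca_e.inverse_transform(Ze_hat)   # reconstructed exact curve
```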

Relevance: 100.00%

Abstract:

Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgement. There is therefore a strong need for simple, transparent tools that help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools for predicting exposure levels. In a first step, a survey was conducted among occupational hygienists in Switzerland to define their expectations of modelling tools (types of results, models, and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were identified as key parameters. Experimental and modelling studies were then performed in specific cases to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to carbon monoxide in different situations, and the results were compared with exposure levels reported in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate, and a classical two-zone model was used to assess aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points with direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, although the attribution to a given compartment could not be made on the basis of simple geometric considerations. In a further step, the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The data obtained were used to improve an existing two-compartment exposure model: a tool was developed to include specific determinants in the choice of compartment, thus improving the reliability of the predictions. These investigations improved our understanding of modelling tools and identified their limitations. The integration of more accessible determinants that match experts' needs should encourage model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use but may also improve the conditions in which expert judgements take place, and therefore workers' health protection.
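The two-zone model referred to here is a standard construction: a well-mixed near-field box around the source exchanging air with a well-mixed far-field (the rest of the room). A minimal sketch under these textbook assumptions follows; all parameter values are invented for illustration and are not taken from the study.

```python
from scipy.integrate import solve_ivp

# Classical two-zone (near-field/far-field) model, illustrative values:
G    = 5.0    # emission rate, mg/min
beta = 5.0    # NF<->FF interzonal airflow, m^3/min
Q    = 20.0   # room supply/exhaust airflow, m^3/min
V_nf = 1.0    # near-field volume, m^3
V_ff = 49.0   # far-field volume, m^3

def two_zone(t, C):
    C_nf, C_ff = C
    dC_nf = (G + beta * (C_ff - C_nf)) / V_nf           # source + exchange
    dC_ff = (beta * (C_nf - C_ff) - Q * C_ff) / V_ff    # exchange - ventilation
    return [dC_nf, dC_ff]

sol = solve_ivp(two_zone, (0, 30), [0.0, 0.0])
C_nf, C_ff = sol.y[:, -1]
print(f"After 30 min: near-field {C_nf:.2f} mg/m^3, far-field {C_ff:.2f} mg/m^3")
# Steady state: C_ff = G/Q and C_nf = G/Q + G/beta (near-field excess G/beta),
# which is why workers close to the source see higher concentrations.
```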

Relevance: 100.00%

Abstract:

Introduction: Ethylglucuronide (EtG) is a direct and specific metabolite of ethanol. Its determination in hair is of increasing interest for detecting and monitoring alcohol abuse. The quantification of EtG in hair requires analytical methods with the highest sensitivity and specificity. We present a fully validated method based on gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS). The method was validated according to the French Society of Pharmaceutical Sciences and Techniques (SFSTP) guidelines, which are based on the determination of total measurement error and accuracy profiles. Methods: Washed and powdered hair is extracted in water by ultrasonic incubation. After purification by Oasis MAX solid-phase extraction, the derivatized EtG is detected and quantified by GC-NCI-MS/MS in selected reaction monitoring mode. The transitions m/z 347→163 and m/z 347→119 were used for the quantification and identification of EtG. Four quality controls (QC) prepared from hair samples taken post mortem from two subjects with a known history of alcoholism were used. A proficiency test with seven participating laboratories was first run to validate the EtG concentration of each QC sample. Based on the results of this test, these samples were then used as internal controls for validation of the method. Results: The mean EtG concentrations measured in the four QC samples were 259.4, 130.4, 40.8, and 8.4 pg/mg hair. Method validation showed linearity between 8.4 and 259.4 pg/mg hair (r² > 0.999). The lower limit of quantification was set at 8.4 pg/mg. Repeatability and intermediate precision were below 13.2% for all concentrations tested. Conclusion: The method proved suitable for routine analysis of EtG in hair, and was successfully applied to hair samples collected from different alcohol consumers.
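As a toy illustration of the linearity criterion reported above, the sketch below fits a least-squares calibration line through the four QC levels and computes r²; only the concentrations come from the abstract, while the instrument responses are invented stand-ins.

```python
import numpy as np

# QC concentrations (pg/mg hair) from the abstract vs. hypothetical
# peak-area ratios (illustrative numbers, not the study's raw data).
conc = np.array([8.4, 40.8, 130.4, 259.4])
resp = np.array([0.021, 0.101, 0.324, 0.648])

# Least-squares calibration line and coefficient of determination r^2.
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)
print(f"slope={slope:.4g}, intercept={intercept:.4g}, r^2={r2:.5f}")  # expect > 0.999
```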

Relevance: 100.00%

Abstract:

We want to shed some light on the development of person mobility by analysing the repeated cross-sectional data of the four German National Travel Surveys (NTS) conducted since the mid-1970s. The driving forces mentioned above operate on different levels of the system that generates the spatial behaviour we observe: travel demand is derived from the needs and desires of individuals to participate in spatially separated activities. Individuals organise their lives in an interactive process within the context they live in, using the given infrastructure. Essential determinants of their demand are the individual's socio-demographic characteristics, but the opportunities and constraints defined by the household and the environment are also relevant for the behaviour that can ultimately be realised. To fully capture the context that determines individual behaviour, the (nested) hierarchy of persons within households within spatial settings has to be considered; the data we use for our analysis contain information on all three levels. With the analysis of these micro-data we attempt to improve our understanding of the macro-level developments summarized above. In addition, we investigate the predictive power of a few classic socio-demographic variables for the daily travel distance of individuals in the four NTS data sets, with a focus on the evolution of this predictive power. The additional task of correctly measuring distances travelled by means of the NTS is complicated by the fact that, although these surveys measure the same variables, different sampling designs and data-collection procedures were used. So the aim of the analysis is also to detect variables whose control corrects for the known measurement error, as a prerequisite for applying appropriate models to better understand the development of individual travel behaviour in a multilevel context. This task is further complicated by the fact that variables describing survey procedures and outcomes are only provided with the data set for 2002 (see Infas and DIW Berlin, 2003).
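A hedged sketch of the kind of multilevel model the nested person-household-setting hierarchy calls for, using a random intercept per household; the data frame and variable names are fabricated stand-ins, not the NTS codebook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical NTS-like micro-data: persons nested in households; a spatial
# setting level could be added analogously. All names are illustrative.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "distance_km": rng.lognormal(3, 0.6, n),   # daily travel distance
    "age":         rng.integers(18, 80, n),
    "employed":    rng.integers(0, 2, n),
    "household":   rng.integers(0, 150, n),    # grouping level
})

# Random intercept per household captures the shared context of persons
# living together (the household level of the nested hierarchy).
model = smf.mixedlm("np.log(distance_km) ~ age + employed",
                    df, groups=df["household"]).fit()
print(model.summary())
```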

Relevance: 100.00%

Abstract:

Blood pressure (BP) is a heritable, quantitative trait with intraindividual variability and susceptibility to measurement error. Genetic studies of BP generally use single-visit measurements and thus cannot remove variability occurring over months or years. We leveraged the idea that averaging BP measured across time would improve phenotypic accuracy and thereby increase statistical power to detect genetic associations. We studied long-term average (LTA) systolic BP (SBP), diastolic BP (DBP), mean arterial pressure (MAP), and pulse pressure (PP), each averaged over multiple years, in 46,629 individuals of European ancestry. We identified 39 trait-variant associations across 19 independent loci (p < 5 × 10⁻⁸); the five associations (in four loci) uniquely identified by our LTA analyses included those of SBP and MAP at 2p23 (rs1275988, near KCNK3), DBP at 2q11.2 (rs7599598, in FER1L5), and PP at 6p21 (rs10948071, near CRIP3) and 7p13 (rs2949837, near IGFBP3). Replication analyses conducted in cohorts with single-visit BP data showed positive replication of the associations at nominal significance (p < 0.05). We estimated a 20% gain in statistical power with LTA as compared to single-visit BP association studies. Using LTA analysis, we identified genetic loci influencing BP; LTA might be one way of increasing the power of genetic associations for continuous traits in extant samples, for other phenotypes that are measured serially over time.
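The power argument can be reproduced in a few lines: averaging k visits shrinks the visit-to-visit noise variance by 1/k, so the genetic component explains a larger share of the phenotype. A small simulation sketch with invented variance components (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10_000, 4                        # individuals, visits per individual
var_g, var_e, var_v = 1.0, 3.0, 4.0     # genetic, stable-environment, visit noise

g = rng.normal(0, np.sqrt(var_g), n)                       # genetic value
stable = g + rng.normal(0, np.sqrt(var_e), n)              # long-term phenotype
visits = stable[:, None] + rng.normal(0, np.sqrt(var_v), (n, k))

for bp, label in [(visits[:, 0], "single visit"),
                  (visits.mean(axis=1), "long-term average")]:
    r2 = np.corrcoef(g, bp)[0, 1] ** 2
    print(f"{label:18s} variance explained by genotype: {r2:.3f}")
# Expected: var_g/(var_g+var_e+var_v) = 0.125 for one visit,
# but var_g/(var_g+var_e+var_v/k) = 0.200 for the 4-visit average.
```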

Relevance: 100.00%

Abstract:

Kirton's Adaption-Innovation Inventory (KAI) is a widely-used measure of "cognitive style." Surprisingly, there is very little research investigating the discriminant and incremental validity of the KAI. In two studies (n = 213), we examined whether (a) we could predict KAI scores with the "big five" personality dimensions and (b) the KAI scores predicted leadership behavior when controlling for personality and ability. Correcting for measurement error, we found that KAI scores were predicted mostly by personality and gender (multiple R = 0.82). KAI scores did not predict variance in leadership while controlling for established predictors. Our findings add to recent literature that questions the uniqueness and utility of cognitive style or similar "style" constructs; researchers using such measures must control for the big five factors and correct for measurement error to avoid confounded interpretations.
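The correction for measurement error used in studies of this kind is presumably the classical disattenuation formula, which divides the observed correlation by the square root of the product of the two reliabilities; a minimal sketch with illustrative numbers:

```python
# Classical disattenuation of an observed correlation for measurement error.
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """True-score correlation: observed r divided by sqrt(rel_x * rel_y)."""
    return r_xy / (rel_x * rel_y) ** 0.5

# E.g. an observed r = .50 between the KAI and a personality scale with
# reliabilities .85 and .80 (illustrative values) implies a corrected r of ~.61.
print(round(disattenuate(0.50, 0.85, 0.80), 2))
```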

Relevance: 100.00%

Abstract:

Using Monte Carlo simulations and reanalyzing the data of a validation study of the AEIM emotional intelligence test, we demonstrated that an atheoretical approach and the use of weak statistical procedures can result in biased validity estimates. These procedures included stepwise regression (and, more generally, failing to include important theoretical controls), extreme-scores analysis, and ignoring heteroscedasticity as well as measurement error. The authors of the AEIM test responded by offering more complete information about their analyses, allowing us to further examine the perils of ignoring theory and correct statistical procedures. In this paper we show with extended analyses that the AEIM test is invalid.
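One of the named perils, failing to include an important theoretical control, is easy to demonstrate by simulation; the hedged sketch below (invented effect sizes, not the AEIM data) shows a spurious validity coefficient vanishing once the control enters the model.

```python
import numpy as np

rng = np.random.default_rng(3)
reps, n = 1_000, 200
biased, correct = [], []
for _ in range(reps):
    ability = rng.normal(size=n)                 # the omitted theoretical control
    test = 0.6 * ability + rng.normal(size=n)    # "new" test partly proxies ability
    y = 0.5 * ability + rng.normal(size=n)       # outcome driven by ability, not the test
    X1 = np.column_stack([np.ones(n), test])
    X2 = np.column_stack([np.ones(n), test, ability])
    biased.append(np.linalg.lstsq(X1, y, rcond=None)[0][1])
    correct.append(np.linalg.lstsq(X2, y, rcond=None)[0][1])

print(f"test coefficient without control: {np.mean(biased):.3f}")   # spuriously ~0.22
print(f"test coefficient with control:    {np.mean(correct):.3f}")  # ~0.00
```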

Relevance: 100.00%

Abstract:

The OLS estimator of the intergenerational earnings correlation is biased towards zero, while the instrumental variables (IV) estimator is biased upwards. The first of these results arises because of measurement error, while the latter rests on the presumption that parental education is an invalid instrument. We propose a panel data framework for quantifying the asymptotic biases of these estimators, as well as a mis-specification test for the IV estimator.
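Both biases follow from the classical measurement error model and can be checked numerically; in this sketch the elasticity, noise variances, and the exclusion-restriction violation are all illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 100_000, 0.4                  # illustrative intergenerational elasticity
x = rng.normal(size=n)                  # true parental earnings (standardized)
e = rng.normal(size=n)                  # structural error
y = beta * x + e                        # child earnings
x_obs = x + rng.normal(size=n)          # single-year earnings = truth + noise

# OLS attenuation: slope shrinks by the reliability ratio var(x)/(var(x)+var(u)).
b_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
print(f"OLS on noisy x: {b_ols:.3f}  (expected {beta} * 1/(1+1) = {beta / 2})")

# An instrument correlated with x but also directly with y (invalid, as the
# abstract presumes for parental education) biases IV upwards.
z = x + 0.5 * rng.normal(size=n) + 0.3 * e   # violates the exclusion restriction
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]
print(f"IV with invalid instrument: {b_iv:.3f}  (> {beta})")
```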

Relevance: 100.00%

Abstract:

In this paper we propose a highly accurate approximation procedure for ruin probabilities in the classical collective risk model, based on a quadrature/rational approximation procedure proposed in [2]. For a certain class of claim size distributions (which contains the completely monotone distributions) we give a theoretical justification for the method. We also show that under weaker assumptions on the claim size distribution, the method may still perform reasonably well in some cases. This in particular provides an efficient alternative to a related method proposed in [3]. A number of numerical illustrations of the performance of the procedure are provided for both completely monotone and other types of random variables.
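For orientation, the classical model with exponential claims admits a closed-form ruin probability, which is the usual sanity check for any approximation scheme; this is the textbook formula, not the quadrature/rational method of the paper.

```python
import numpy as np

def ruin_prob_exponential(u, lam, mu, c):
    """Ruin probability in the Cramer-Lundberg model with Poisson(lam)
    arrivals, Exp(mean mu) claims and premium rate c:
        psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c) * u),
    valid when lam*mu/c < 1 (positive safety loading)."""
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)

for u in (0.0, 1.0, 5.0, 10.0):
    print(u, ruin_prob_exponential(u, lam=1.0, mu=1.0, c=1.25))
```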

Relevance: 100.00%

Abstract:

Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distribution and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
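A drastically simplified sketch of the three-step structure, without censoring and with median/MAD standing in for the initial S-estimator, so it illustrates the flow of the procedure rather than the paper's actual estimator; all numbers are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
log_t = rng.normal(2.0, 0.5, 200)      # log failure times under the model
log_t[:10] += 5.0                      # gross outliers

# Step 1: initial high-breakdown location/scale estimate (median/MAD here).
mu0 = np.median(log_t)
s0 = stats.median_abs_deviation(log_t, scale="normal")

# Step 2: reject observations that are unlikely under the fitted model,
# with a cutoff that adapts to n so clean data asymptotically lose nothing.
r = np.abs(log_t - mu0) / s0
cut = stats.norm.ppf(1 - 0.25 / len(log_t))
keep = r <= cut

# Step 3: maximum likelihood on the retained observations.
mu_hat, s_hat = stats.norm.fit(log_t[keep])
print(f"robust fit: mu={mu_hat:.2f}, sigma={s_hat:.2f} (true 2.0, 0.5)")
```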

Relevance: 100.00%

Abstract:

Adrenoceptors are prototypic members of the superfamily of seven transmembrane domain, G protein-coupled receptors. Study of the properties of several mutationally activated adrenoceptors is deepening understanding of the normal functioning of this ubiquitous class of receptors. The new findings suggest an expansion of the classical ternary complex model of receptor action to include an explicit isomerization of the receptors from an inactive to an active state which couples to the G protein ('allosteric ternary complex model'). This isomerization involves conformational changes which may occur spontaneously, or be induced by agonists or appropriate mutations which abrogate the normal 'constraining' function of the receptor, allowing it to 'relax' into the active conformation. Robert Lefkowitz and colleagues discuss the physiological and pathophysiological implications of these new insights into regulation of receptor activity.

Relevance: 100.00%

Abstract:

Waterproofing agents are widely used to protect leather and textiles in both domestic and occupational activities. An outbreak of acute respiratory syndrome following exposure to waterproofing sprays occurred during the winter of 2002-2003 in Switzerland. About 180 cases were reported to the Swiss Toxicological Information Centre between October 2002 and March 2003, whereas fewer than 10 cases per year had been recorded previously. The reported cases involved three brands of sprays containing a common waterproofing mixture that had undergone a formulation change in the months preceding the outbreak. A retrospective analysis was undertaken in collaboration with the Swiss Toxicological Information Centre and the Swiss Registries for Interstitial and Orphan Lung Diseases to clarify the circumstances and possible causes of the observed health effects. Individual exposure data were generated with questionnaires and experimental emission measurements, and the collected data were used to conduct numerical simulations for 102 cases of exposure. A classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. The assessed doses and exposure levels were spread over several orders of magnitude. No dose-response relationship was found between exposure indicators and health-effect indicators (perceived severity and clinical indicators). Weak relationships were found between unspecific inflammatory response indicators (leukocytes, C-reactive protein) and the maximal exposure concentration. The results reveal high interindividual variability in response and suggest that some indirect mechanism(s) predominate in the occurrence of the respiratory disease. Furthermore, no threshold could be found to define a safe level of exposure. These findings suggest that improving environmental exposure conditions during spraying alone is not a sufficient measure to prevent future outbreaks of waterproofing-spray toxicity; more efficient preventive measures are needed prior to the marketing and distribution of new waterproofing agents.

Relevance: 100.00%

Abstract:

Data characteristics and species traits are expected to influence the accuracy with which species' distributions can be modeled and predicted. We compare 10 modeling techniques in terms of predictive power and sensitivity to location error, change in map resolution, and sample size, and assess whether some species traits can explain variation in model performance. We focused on 30 native tree species in Switzerland and used presence-only data to model current distribution, which we evaluated against independent presence-absence data. While there are important differences between the predictive performance of modeling methods, the variance in model performance is greater among species than among techniques. Within the range of data perturbations in this study, some extrinsic parameters of data affect model performance more than others: location error and sample size reduced performance of many techniques, whereas grain had little effect on most techniques. No technique can rescue species that are difficult to predict. The predictive power of species-distribution models can partly be predicted from a series of species characteristics and traits based on growth rate, elevational distribution range, and maximum elevation. Slow-growing species or species with narrow and specialized niches tend to be better modeled. The Swiss presence-only tree data produce models that are reliable enough to be useful in planning and management applications.
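A minimal sketch of one evaluation step in such a comparison: fit a simple model on presence records plus random background (pseudo-absence) points, then score it with AUC on independent presence-absence data. Everything here is synthetic; real workflows would use environmental predictor layers and one of the ten modeling techniques compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

def suitability(x):            # true occurrence probability along one gradient
    return 1 / (1 + np.exp(-(x - 0.5) * 6))

presence = rng.uniform(0.5, 1.0, (150, 1))       # presence-only records
background = rng.uniform(0.0, 1.0, (300, 1))     # random background sample
X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]
model = LogisticRegression().fit(X, y)

# Independent presence-absence evaluation set.
X_eval = rng.uniform(0, 1, (200, 1))
y_eval = rng.binomial(1, suitability(X_eval[:, 0]))
auc = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
print(f"AUC on independent presence-absence data: {auc:.2f}")
```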

Relevance: 40.00%

Abstract:

Indirect calorimetry based on respiratory exchange measurements has been used successfully since the beginning of the century to obtain an estimate of heat production (energy expenditure) in human subjects and animals. The errors inherent in this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors, and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include a change in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates; this is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard to validate whole-body substrate "oxidation" rates, in contrast to the "validation" of heat production by indirect calorimetry through the use of direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range. However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates owing to the failure to account for glycogenolysis in the tissues storing glucose, since this escapes the systemic circulation. A major advantage of isotopic techniques is that they are able to estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate (such as ethanol) is administered in addition to the three macronutrients, isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. It should be kept in mind, however, that the assessment of substrate oxidation by indirect calorimetry may involve large errors, in particular over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
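The "model of calculation" component of the error budget refers to standard formulas; a short sketch using the abbreviated Weir equation for energy expenditure and the Frayn equations for net substrate disappearance, with illustrative gas-exchange values (these are the textbook equations, not this paper's derivations).

```python
def energy_expenditure_kcal_min(vo2_l_min, vco2_l_min, urinary_n_g_min=0.0):
    """Abbreviated Weir equation: EE from O2 consumption, CO2 production
    and urinary nitrogen excretion."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min - 2.17 * urinary_n_g_min

def substrate_oxidation_g_min(vo2_l_min, vco2_l_min, urinary_n_g_min=0.0):
    """Frayn equations for net carbohydrate and fat 'oxidation'
    (disappearance) rates; protein from urinary nitrogen."""
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * urinary_n_g_min
    fat = 1.67 * (vo2_l_min - vco2_l_min) - 1.92 * urinary_n_g_min
    protein = 6.25 * urinary_n_g_min
    return cho, fat, protein

vo2, vco2, n = 0.25, 0.20, 0.008          # illustrative resting adult, RQ = 0.8
print(f"EE  = {energy_expenditure_kcal_min(vo2, vco2, n):.2f} kcal/min")
cho, fat, prot = substrate_oxidation_g_min(vo2, vco2, n)
print(f"CHO = {cho:.3f} g/min, fat = {fat:.3f} g/min, protein = {prot:.3f} g/min")
```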