171 results for: Linear regression method


Relevance: 90.00%

Abstract:

The aim of the present study was to establish and compare the durations of the seminiferous epithelium cycles of the common shrew Sorex araneus, which is characterized by a high metabolic rate and multiple paternity, and the greater white-toothed shrew Crocidura russula, which is characterized by a low metabolic rate and a monogamous mating system. Twelve S. araneus males and fifteen C. russula males were injected intraperitoneally with 5-bromodeoxyuridine, and the testes were collected. For cycle length determinations, we applied both the classical method of estimation and, as a new method, linear regression. With regard to variance, and even with a relatively small sample size, the new method seems to be more precise. In addition, the regression method allows information to be inferred for every animal tested, enabling comparisons of different factors with cycle lengths. Our results show that increased testis size not only leads to increased sperm production but also reduces the duration of spermatogenesis. The calculated cycle lengths were 8.35 days for S. araneus and 12.12 days for C. russula. The data obtained in the present study provide the basis for future investigations into the effects of metabolic rate and mating systems on the speed of spermatogenesis.
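The regression approach to cycle length can be sketched as follows. The staging data below are purely hypothetical (the study's BrdU data are not reproduced here): the labeled cell front is staged at several times after injection, the fraction of a cycle traversed is regressed on time, and the reciprocal of the slope gives the cycle length for that animal.

```python
import numpy as np

# Hypothetical staging data for one animal: hours after BrdU injection and
# the fraction of one seminiferous cycle traversed by the labeled cell front.
hours = np.array([24.0, 72.0, 120.0, 168.0])
cycle_fraction = np.array([0.12, 0.36, 0.60, 0.84])

# The slope is the fraction of a cycle completed per hour; its reciprocal,
# converted to days, is the cycle length estimate for this animal.
slope, intercept = np.polyfit(hours, cycle_fraction, 1)
cycle_length_days = (1.0 / slope) / 24.0
```

Because each animal yields its own slope, per-animal cycle lengths can then be regressed against covariates such as testis size.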

Relevance: 90.00%

Abstract:

BACKGROUND: The association between smoking and total energy expenditure (TEE) is still controversial. We examined this association in a multi-country study where TEE was measured in a subset of participants by the doubly labeled water (DLW) method, the gold standard for this measurement. METHODS: This study includes 236 participants from five different African origin populations who underwent DLW measurements and had complete data on the main covariates of interest. Self-reported smoking status was categorized as either light (<7 cig/day) or high (≥7 cig/day). Lean body mass was assessed by deuterium dilution and physical activity (PA) by accelerometry. RESULTS: The prevalence of smoking was 55% in men and 16% in women with a median of 6.5 cigarettes/day. There was a trend toward lower BMI in smokers than non-smokers (not statistically significant). TEE was strongly correlated with fat-free mass (men: 0.70; women: 0.79) and with body weight (0.59 in both sexes). Using linear regression and adjusting for body weight, study site, age, PA, alcohol intake and occupation, TEE was larger in high smokers than in never smokers among men (difference of 298 kcal/day, p = 0.045) but not among women (162 kcal/day, p = 0.170). The association became slightly weaker in men (254 kcal/day, p = 0.058) and disappeared in women (-76 kcal/day, p = 0.380) when adjusting for fat-free mass instead of body weight. CONCLUSION: There was an association between smoking and TEE among men. However, the lack of an association among women, which may be partly related to the small number of smoking women, also suggests a role of unaccounted confounding factors.

Relevance: 90.00%

Abstract:

Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting using solid-phase microextraction (SPME) followed by gas chromatography (GC) was proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate to assess the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was thus developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found on a crime scene were put forward. In order to estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
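A likelihood-ratio computation of this kind can be sketched under simple stand-in assumptions: an exponential aging model with made-up parameters and Gaussian measurement noise. This is only an illustration of the logic; the paper's actual model and fitted parameters are not reproduced here.

```python
import numpy as np

def decay(t, a0=100.0, k=0.05):
    """Stand-in aging model: compound intensity t hours after discharge."""
    return a0 * np.exp(-k * t)

def likelihood_ratio(y, t1, t2, sigma=5.0):
    """LR for H1 (discharged t1 hours ago) versus H2 (t2 hours ago),
    assuming Gaussian measurement noise around the decay model."""
    def norm_pdf(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return norm_pdf(y, decay(t1), sigma) / norm_pdf(y, decay(t2), sigma)

# A measurement of 60 strongly supports a recent discharge (10 h) over 48 h.
lr = likelihood_ratio(y=60.0, t1=10.0, t2=48.0)
```

An LR much larger than 1 supports the first hypothesis; in casework the parameters would come from the fitted non-linear regression model rather than fixed constants.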

Relevance: 90.00%

Abstract:

Abstract: The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the necessary activities in keeping a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, associated with uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This Thesis presents new computational methodologies to study gene regulation. In addition, we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys like chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1.
C-MYC and SP1 bind preferentially at promoters and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of TF estrogen receptor alpha (hERa) target genes and we study the influence of hERa in regulating transcription. hERa, upon hormone estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarce experimental data about the binding sites of other TFs that may interact with hERa, we conduct in silico analysis of the sequences underlying the ChIP sites using the collection of position weight matrices (PWMs) of hERa partners, TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTags (ChIP-pet) data about hERa binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and high occurrence of SP1 motifs, in particular near estrogen responsive genes. The second group shows strong binding of hERa and significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression.
Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time-series of biological measurements such as quantification of mRNA levels or protein concentrations. Our approach uses the well-established penalized linear regression models, where we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
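The sign-coherence idea can be sketched with a proximal-gradient (ISTA) lasso in which each coefficient is projected onto a prescribed sign (a TF fixed as an activator gets nonnegative weights, a repressor nonpositive ones). This is a minimal illustration on simulated data, not the thesis implementation.

```python
import numpy as np

def sign_constrained_lasso(X, y, signs, lam=1.0, n_iter=3000):
    """ISTA for lasso with per-coefficient sign constraints:
    signs[j] = +1 forces b[j] >= 0, signs[j] = -1 forces b[j] <= 0."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    t = 1.0 / L
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y)                # gradient of 0.5*||y - Xb||^2
        b = b - t * g
        b = np.sign(b) * np.maximum(np.abs(b) - t * lam, 0.0)  # soft threshold
        b = signs * np.maximum(signs * b, 0.0)                 # sign projection
    return b

# Simulated targets: one activating TF, one repressing TF, one inactive TF.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -0.8, 0.0]) + 0.05 * rng.normal(size=200)
b_hat = sign_constrained_lasso(X, y, signs=np.array([1.0, -1.0, 1.0]))
```

The projection step is the only change relative to a plain lasso, which is what makes the coherence constraint cheap to impose.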

Relevance: 90.00%

Abstract:

OBJECTIVES: This study aimed at measuring the lipophilicity and ionization constants of diastereoisomeric dipeptides, interpreting them in terms of conformational behavior, and developing statistical models to predict them. METHODS: A series of 20 dipeptides of general structure NH(2) -L-X-(L or D)-His-OMe was designed and synthesized. Their experimental ionization constants (pK(1) , pK(2) and pK(3) ) and lipophilicity parameters (log P(N) and log D(7.4) ) were measured by potentiometry. Molecular modeling in three media (vacuum, water, and chloroform) was used to explore and sample their conformational space, and for each stored conformer to calculate their radius of gyration, virtual log P (preferably written as log P(MLP) , i.e., obtained by the molecular lipophilicity potential (MLP) method) and polar surface area (PSA). Means and ranges were calculated for these properties, as was their sensitivity (i.e., the ratio between property range and number of rotatable bonds). RESULTS: Marked differences between diastereoisomers were seen in their experimental ionization constants and lipophilicity parameters. These differences are explained by molecular flexibility, configuration-dependent differences in intramolecular interactions, and accessibility of functional groups. Multiple linear equations correlated experimental lipophilicity parameters and ionization constants with PSA range and other calculated parameters. CONCLUSION: This study documents the differences in lipophilicity and ionization constants between diastereoisomeric dipeptides. Such configuration-dependent differences are shown to depend markedly on differences in conformational behavior and to be amenable to multiple linear regression. Chirality 24:566-576, 2012. © 2012 Wiley Periodicals, Inc.
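A multiple linear regression of a measured property on calculated descriptors can be sketched as follows. The descriptor values below are entirely hypothetical; PSA range and radius of gyration are used only as stand-in predictors of a lipophilicity parameter.

```python
import numpy as np

# Hypothetical descriptors for 6 dipeptides: [PSA range, radius of gyration]
descriptors = np.array([
    [30.0, 3.1], [45.0, 3.4], [28.0, 2.9],
    [52.0, 3.8], [33.0, 3.0], [48.0, 3.6],
])
log_p = np.array([-1.2, -1.8, -1.0, -2.1, -1.3, -1.9])  # measured lipophilicity

# Design matrix with an intercept column; ordinary least-squares fit.
A = np.hstack([np.ones((len(log_p), 1)), descriptors])
coef, *_ = np.linalg.lstsq(A, log_p, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((log_p - pred) ** 2) / np.sum((log_p - log_p.mean()) ** 2)
```

With correlated descriptors, as here, individual coefficients should be interpreted cautiously even when the overall fit (r2) is high.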

Relevance: 90.00%

Abstract:

Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against ρ and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for the 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; case 3, -1.5% to 2%).
After density correction, the linear relationship with density disappeared for cases 2 and 3 but persisted for case 1 (Δ = 0.41ρ - 0.38, R(2) = 0.88), although with a less pronounced slope. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed and improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
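The voxel-binned linear fits of Δ against density can be sketched on synthetic numbers resembling case 1's reported coefficients (the clinical data themselves are not reproduced here):

```python
import numpy as np

# Synthetic density-binned relative differences mimicking case 1's fit
# (Delta = -0.56*rho + 0.62), with small artificial scatter added.
rho = np.array([0.3, 0.5, 0.8, 1.0, 1.05, 1.2])  # density, g/cm^3
delta = -0.56 * rho + 0.62 + np.array([0.01, -0.01, 0.0, 0.01, -0.01, 0.0])

slope, intercept = np.polyfit(rho, delta, 1)
r2 = np.corrcoef(rho, delta)[0, 1] ** 2
```

A strong linear trend of Δ with ρ, as here, is exactly the situation where a simple density correction can remove most of the DK method's bias.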

Relevance: 90.00%

Abstract:

This paper presents the general regression neural network (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. GRNN is trained using data coming from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. The terrain convexity, slope and exposure are considered by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying GRNN in prediction mode during the period 1968-2008. This study demonstrates that using topographic features as inputs in GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm which selects the useful terrain features by eliminating the noisy ones. This research provides a framework for extending the low-dimensional interpolation models to high-dimensional spaces by integrating additional features accounting for the topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
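At its core a GRNN is a Gaussian-kernel-weighted average of the training targets (the Nadaraya-Watson estimator). A minimal sketch, with hypothetical terrain-feature inputs and wind speeds:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: Gaussian-kernel-weighted average of training
    targets, with bandwidth sigma controlling the smoothing."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical terrain features (e.g., convexity, slope) -> wind speed (m/s)
X = np.array([[0.1, 0.2], [0.8, 0.5], [0.4, 0.9], [0.6, 0.1]])
y = np.array([2.0, 5.5, 4.0, 3.2])
pred = grnn_predict(X, y, np.array([[0.1, 0.2]]), sigma=0.1)
```

Because the only free parameter is the bandwidth, adding or removing input features (as in the adaptive variant described above) changes the distance computation directly, which is what makes feature selection natural in this framework.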

Relevance: 90.00%

Abstract:

Molecular monitoring of BCR/ABL transcripts by real time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for clinical management of patients with BCR/ABL-positive CML and ALL. Though quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, reporting, and lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions were prepared, ranging from 10(0) to 10(-5), in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10(-3) b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct), quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants have submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10(-5), 10(-4), 10(-3), and 10(-2) dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10(-5), 10(-4), and 10(-3) dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10(-5), 10(-4), 10(-3), and 10(-2) dilutions, respectively.
Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10(-1) to 10(-4) dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean=0.939; median=0.9627; range 0.399-1.1872), b3a2 (mean=0.925; median=0.922; range 0.625-1.140), and e1a2 (mean=0.897; median=0.909; range 0.5174-1.138) laboratory results (Fig. 1-3). Thirty-four (n=34) out of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is overall very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance. Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and possible integration in worldwide efforts to standardize quantitative BCR/ABL testing.
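The slope comparison can be sketched as a log-log regression of measured ratios against nominal dilutions; a slope near 1 indicates the assay tracks the 10-fold steps across the dynamic range. The measured values below are hypothetical, not any participant's data.

```python
import numpy as np

# Hypothetical %BCR/ABL ratios measured across a 10-fold dilution series.
dilutions = np.array([1e-1, 1e-2, 1e-3, 1e-4])
measured_ratio = np.array([9.5e-1, 1.1e-1, 0.9e-2, 1.2e-3])

# Regress log10(measured) on log10(nominal); a slope near 1 means each
# 10-fold dilution produces close to a 10-fold drop in the measured ratio.
slope, intercept = np.polyfit(np.log10(dilutions), np.log10(measured_ratio), 1)
```

Slopes well below 1 (like the 0.399 minimum reported above) indicate compression of the dilution series, i.e., loss of quantitative linearity at one end of the range.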

Relevance: 90.00%

Abstract:

BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these patients. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P >.1). Both deconvolution methods were equivalent (P >.1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
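The singular-value-decomposition deconvolution can be sketched on a noise-free simulation. The threshold, curves, and "flow" value below are illustrative assumptions, not the study's data: the tissue curve is the arterial input function (AIF) convolved with a flow-scaled residue function, and deconvolution recovers that scale.

```python
import numpy as np

def deconvolve_svd(aif, tissue, rel_threshold=1e-6):
    """Deconvolve a tissue curve by the AIF via truncated SVD of the
    convolution matrix; the peak of the recovered curve estimates flow."""
    n = len(aif)
    A = np.array([[aif[i - j] if j <= i else 0.0 for j in range(n)]
                  for i in range(n)])        # lower-triangular convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)  # regularization
    return Vt.T @ (s_inv * (U.T @ tissue))

# Noise-free simulation with an assumed "flow" of 0.6
t = np.arange(1.0, 17.0)
aif = t * np.exp(-t / 2.0)                   # gamma-variate-like input curve
residue = 0.6 * np.exp(-(t - 1.0) / 4.0)     # flow-scaled residue function
tissue = np.convolve(aif, residue)[:t.size]  # forward convolution model
flow_estimate = deconvolve_svd(aif, tissue).max()
```

With noisy clinical data the truncation threshold must be much larger, which is where the regularization choices (and the reference artery defining the AIF) start to dominate the result.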

Relevance: 90.00%

Abstract:

BACKGROUND: After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE: To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS: This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compares fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS: Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those without a post-CPB Clauss fibrinogen <2.0 g/l with a specificity of 0.83. CONCLUSIONS: When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
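The reported performance measures derive from standard 2x2 confusion-table formulas; a sketch with hypothetical counts, chosen only to give values of similar magnitude to those above (they are not the study's counts):

```python
# Hypothetical 2x2 counts for a flagging rule (e.g., on-CPB A10 <= 10 mm
# versus truly low post-CPB fibrinogen): tp flagged & low, fp flagged &
# not low, fn unflagged & low, tn unflagged & not low.
tp, fp, fn, tn = 59, 40, 1, 200

sensitivity = tp / (tp + fn)     # fraction of truly low levels that are flagged
ppv = tp / (tp + fp)             # fraction of flagged cases that are truly low
specificity = tn / (tn + fp)     # fraction of not-low levels left unflagged
```

A rule like this trades precision (PPV) for a very high sensitivity, which is the desired behavior for an early alert.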

Relevance: 90.00%

Abstract:

OBJECTIVE: Previous studies suggest that arginine vasopressin may have a role in metabolic syndrome (MetS) and diabetes by altering liver glycogenolysis, insulin, and glucagon secretion and pituitary ACTH release. We tested whether plasma copeptin, the stable C-terminal fragment of arginine vasopressin prohormone, was associated with insulin resistance and MetS in a Swiss population-based study. DESIGN AND METHOD: We analyzed data from the population-based Swiss Kidney Project on Genes in Hypertension. Copeptin was assessed by an immunoluminometric assay. Insulin resistance was derived from the HOMA model and calculated as follows: (FPI x FPG)/22.5, where FPI is fasting plasma insulin concentration (mU/L) and FPG fasting plasma glucose (mmol/L). Subjects were classified as having the MetS according to the National Cholesterol Education Program Adult Treatment Panel III criteria. Mixed multivariate linear regression models were built to explore the association of insulin resistance with copeptin. In addition, multivariate logistic regression models were built to explore the association between MetS and copeptin. In the two analyses, adjustment was done for age, gender, center, tobacco and alcohol consumption, socioeconomic status, physical activity, intake of fruits and vegetables and 24 h urine flow rate. Copeptin was log-transformed for the analyses. RESULTS: Among the 1,089 subjects included in this analysis, 47% were male. Mean (SD) age and body mass index were 47.4 (17.6) years and 25.0 (4.5) kg/m2, respectively. The prevalence of MetS was 10.5%. HOMA-IR was higher in men (median 1.3, IQR 0.7-2.1) than in women (median 1.0, IQR 0.5-1.6, P < 0.0001). Plasma copeptin was higher in men (median 5.2, IQR 3.7-7.8 pmol/L) than in women (median 3.0, IQR 2.2-4.3 pmol/L), P < 0.0001. HOMA-IR was positively associated with log-copeptin after full adjustment (β (95% CI) 0.19 (0.09-0.29), P < 0.001). MetS was not associated with copeptin after full adjustment (P = 0.92).
CONCLUSIONS: Insulin resistance, but not MetS, was associated with higher copeptin levels. Further studies should examine whether modifying pharmacologically the arginine vasopressin system might improve insulin resistance, thereby providing insight into the causal nature of this association.
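The HOMA index defined above is a one-line computation:

```python
def homa_ir(fpi_mu_per_l, fpg_mmol_per_l):
    """HOMA insulin resistance: fasting insulin (mU/L) times fasting
    glucose (mmol/L), divided by 22.5, as defined in the study."""
    return fpi_mu_per_l * fpg_mmol_per_l / 22.5

# e.g., fasting insulin 10 mU/L with fasting glucose 5.0 mmol/L
example = homa_ir(10.0, 5.0)
```

The divisor 22.5 normalizes the product so that a "normal" individual scores close to 1.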

Relevance: 90.00%

Abstract:

Free induction decay (FID) navigators were found to qualitatively detect rigid-body head movements, yet it is unknown to what extent they can provide quantitative motion estimates. Here, we acquired FID navigators at different sampling rates and simultaneously measured head movements using a highly accurate optical motion tracking system. This strategy allowed us to estimate the accuracy and precision of FID navigators for quantification of rigid-body head movements. Five subjects were scanned with a 32-channel head coil array on a clinical 3T MR scanner during several resting and guided head movement periods. For each subject we trained a linear regression model based on FID navigator and optical motion tracking signals. FID-based motion model accuracy and precision were evaluated using cross-validation. FID-based prediction of rigid-body head motion achieved mean translational and rotational errors of 0.14±0.21 mm and 0.08±0.13(°), respectively. Robust model training with sub-millimeter and sub-degree accuracy could be achieved using 100 data points with motion magnitudes of ±2 mm and ±1(°) for translation and rotation. The obtained linear models appeared to be subject-specific, as inter-subject application of a "universal" FID-based motion model resulted in poor prediction accuracy. The results show that substantial rigid-body motion information is encoded in FID navigator signal time courses. Although the applied method currently requires the simultaneous acquisition of FID signals and optical tracking data, the findings suggest that multi-channel FID navigators have the potential to complement existing tracking technologies for accurate rigid-body motion detection and correction in MRI.

Relevance: 90.00%

Abstract:

BACKGROUND: Obesity has been shown to be associated with depression and it has been suggested that higher body mass index (BMI) increases the risk of depression and other common mental disorders. However, the causal relationship remains unclear and Mendelian randomisation, a form of instrumental variable analysis, has recently been employed to attempt to resolve this issue. AIMS: To investigate whether higher BMI increases the risk of major depression. METHOD: Two instrumental variable analyses were conducted to test the causal relationship between obesity and major depression in RADIANT, a large case-control study of major depression. We used a single nucleotide polymorphism (SNP) in FTO and a genetic risk score (GRS) based on 32 SNPs with well-established associations with BMI. RESULTS: Linear regression analysis, as expected, showed that individuals carrying more FTO risk alleles or having a higher GRS had a higher BMI. Probit regression suggested that higher BMI is associated with increased risk of major depression. However, our two instrumental variable analyses did not support a causal relationship between higher BMI and major depression (FTO genotype: coefficient -0.03, 95% CI -0.18 to 0.13, P = 0.73; GRS: coefficient -0.02, 95% CI -0.11 to 0.07, P = 0.62). CONCLUSIONS: Our instrumental variable analyses did not support a causal relationship between higher BMI and major depression. The positive associations of higher BMI with major depression in probit regression analyses might be explained by reverse causality and/or residual confounding.
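The instrumental-variable logic can be sketched with a simulated confounded exposure and a Wald-ratio estimate. The generative model below is an illustrative assumption (not the RADIANT data): a confounder inflates the naive exposure-outcome association, while the genetic instrument, being independent of the confounder, recovers the true null causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
u = rng.normal(size=n)                          # unobserved confounder
g = rng.binomial(2, 0.4, size=n).astype(float)  # instrument: risk-allele count
bmi = 0.5 * g + u + rng.normal(size=n)          # exposure raised by g and by u
dep = u + rng.normal(size=n)                    # outcome: NO causal BMI effect

naive = np.cov(bmi, dep)[0, 1] / np.var(bmi)        # confounded regression slope
wald = np.cov(g, dep)[0, 1] / np.cov(g, bmi)[0, 1]  # IV estimate (Wald ratio)
```

The naive slope is clearly positive despite the true effect being zero, while the Wald ratio sits near zero, mirroring the pattern of a positive probit association but null IV estimates in the abstract.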

Relevance: 90.00%

Abstract:

INTRODUCTION: Occupational exposure to grain dust causes respiratory symptoms and pathologies. To decrease these effects, major changes have occurred in the grain processing industry in the last twenty years. However, there are no data on the effects of these changes on workers' respiratory health. OBJECTIVES: The aim of this study was to evaluate the respiratory health of grain workers and farmers involved in different steps of the processing industry of wheat, the most frequently used cereal in Europe, fifteen years after major improvements in collective protective equipment due to mechanisation. MATERIALS AND METHOD: Information on estimated personal exposure to wheat dust was collected from 87 workers exposed to wheat dust and from 62 controls. Lung function (FEV1, FVC, and PEF), exhaled nitrogen monoxide (FENO) and respiratory symptoms were assessed after the period of highest exposure to wheat during the year. Linear regression models were used to explore the associations between exposure indices and respiratory effects. RESULTS: Acute symptoms - cough, sneezing, runny nose, scratchy throat - were significantly more frequent in exposed workers than in controls. Increased mean exposure level, increased cumulative exposure, and chronic exposure to more than 6 mg.m(-3) of inhaled wheat dust were significantly associated with decreased spirometric parameters: FEV1 and PEF (40 ml and 123 ml.s(-1)), FEV1 and FVC (0.4 ml and 0.5 ml per 100 h.mg.m(-3)), and FEV1 and FVC (20 ml and 20 ml per 100 h at >6 mg.m(-3)), respectively. However, no increase in FENO was associated with increased exposure indices. CONCLUSIONS: The lung functions of wheat-related workers are still affected by their cumulative exposure to wheat dust, despite improvements in the use of collective protective equipment.

Relevance: 90.00%

Abstract:

This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
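GUM-S1-style Monte Carlo propagation through a linear fit can be sketched as follows, with hypothetical data points and uncertainties: the inputs are repeatedly perturbed within their stated uncertainties, each perturbed set is refitted, and the resulting slope distribution is summarized by its mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical experimental points with standard uncertainties on y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sigma_y = np.full_like(y, 0.1)

# Monte Carlo propagation: perturb, refit, and collect the slope as the
# measurand; its mean and standard deviation are the reported estimators.
M = 5000
slopes = np.empty(M)
for m in range(M):
    slopes[m] = np.polyfit(x, y + sigma_y * rng.normal(size=y.size), 1)[0]

slope_mean, slope_std = slopes.mean(), slopes.std(ddof=1)
```

For this linear, Gaussian case the Monte Carlo standard deviation reproduces the analytic result sigma/sqrt(Sxx); the approach pays off when the measurement model or the input distributions make analytic propagation impractical.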