892 results for "sampling bias"
Abstract:
BACKGROUND: Cost-effective means of assessing the levels of risk factors in the population have to be defined in order to monitor these factors over time and across populations. This study analyzes the difference in population estimates of mean body mass index (BMI) and the prevalence of overweight between a health examination survey and a telephone survey. METHODS: The study compares the results of two health surveys, one by telephone (N=820) and the other by physical examination (N=1318). The two surveys, based on independent random samples of the population, were carried out over the same period (1992-1993) in the same population (canton of Vaud, Switzerland). RESULTS: Overall participation rates were 67% and 53% for the health interview survey (HIS) and the health examination survey (HES), respectively. In the HIS, the reporting rate was over 98% for weight and height values. Self-reported weight was on average lower than measured weight, by 2.2 kg in men and 3.5 kg in women, while self-reported height was on average greater than measured height, by 1.2 cm in men and 1.9 cm in women. As a result, in comparison with the HES, the HIS yielded substantially lower mean BMI levels and reduced the prevalence of obesity (BMI > 30 kg/m²) by more than half. These differences were larger for women than for men. CONCLUSION: The two surveys were based on different sampling procedures. However, this difference in design is unlikely to explain the systematic bias observed between self-reported and measured values for height and weight. This bias undermines the overall validity of BMI assessment from telephone surveys.
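The mechanism behind the halved obesity prevalence can be sketched with the reported mean offsets (the subject in this example is hypothetical; only the 2.2 kg weight and 1.2 cm height offsets for men come from the study):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Hypothetical man just above the obesity cutoff, shifted by the study's mean
# reporting offsets for men: weight under-reported by 2.2 kg, height
# over-reported by 1.2 cm.
measured = bmi(93.5, 1.75)                     # about 30.5: classified obese
self_reported = bmi(93.5 - 2.2, 1.75 + 0.012)  # about 29.4: no longer obese
```

Subjects whose measured BMI sits just above 30 kg/m² drop below the cutoff when self-reported values are used, which is how the prevalence of obesity can fall by more than half without any large shift in mean BMI.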
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
The purpose of this study was to prospectively compare free-breathing navigator-gated cardiac-triggered three-dimensional steady-state free precession (SSFP) spin-labeling coronary magnetic resonance (MR) angiography performed by using Cartesian k-space sampling with that performed by using radial k-space sampling. A new dedicated placement of the two-dimensional selective labeling pulse and an individually adjusted labeling delay time were used; the study was approved by the institutional review board. In 14 volunteers (eight men, six women; mean age, 28.8 years) who gave informed consent, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, vessel length, and subjective image quality were investigated. Differences between groups were analyzed with nonparametric tests (Wilcoxon, Pearson chi-square). Radial imaging, as compared with Cartesian imaging, resulted in a significant reduction in the severity of motion artifacts, as well as an increase in SNR (26.9 vs 12.0, P < .05) in the coronary arteries and CNR (23.1 vs 8.8, P < .05) between the coronary arteries and the myocardium. A tendency toward improved vessel sharpness and vessel length was also found with radial imaging. Radial SSFP imaging is a promising technique for spin-labeling coronary MR angiography.
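The SNR and CNR figures above follow the conventional region-of-interest definitions (mean signal divided by the noise standard deviation; the abstract does not spell these out, so the definitions and the synthetic data below are assumptions, with values chosen to land near the reported radial results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ROI samples: coronary lumen signal, myocardial signal, and a
# known noise standard deviation (in practice estimated from a background ROI).
lumen = rng.normal(270.0, 10.0, size=500)
myocardium = rng.normal(40.0, 10.0, size=500)
noise_sd = 10.0

snr = lumen.mean() / noise_sd                        # signal-to-noise ratio
cnr = (lumen.mean() - myocardium.mean()) / noise_sd  # contrast-to-noise ratio
```

With these assumed ROI statistics, snr comes out near 27 and cnr near 23, i.e. in the range reported for radial imaging.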
Abstract:
Objective: Biomonitoring of solvents using the unchanged substance in urine as exposure indicator is still relatively scarce due to some discrepancies between the results reported in the literature. Based on the assessment of toluene exposure, the aim of this work was to evaluate the effects of some steps likely to bias the results and to measure urinary toluene both in experimentally exposed volunteers and in workers of rotogravure factories. Methods: Static headspace was used for toluene analysis. o-Cresol was also measured for comparison. Urine collection, storage and conservation conditions were studied to evaluate possible loss or contamination of toluene in controlled situations applied to six volunteers in an exposure chamber, according to four scenarios with exposure at stable levels from 10 to 50 ppm. Kinetics of toluene elimination were determined over 24 h. A field study was then carried out in a total of 29 workers from two rotogravure printing facilities. Results: Potential contamination during urine collection in the field is confirmed to be a real problem, but technical precautions for sampling, storage and analysis can easily be followed to control the situation. In the volunteers at rest, urinary toluene showed a rapid increase after 2 h, with a steady level after about 3 h. At 47.1 ppm the mean cumulated excretion was about 0.005% of the amount of toluene ventilated. The correlation between toluene levels in air and in the end-of-exposure urine sample was excellent (r = 0.965). In the field study, the median personal exposure to toluene was 32 ppm (range 3.6-148). According to the correlations between environmental and biological monitoring data, the post-shift urinary toluene (r = 0.921) and o-cresol (r = 0.873) concentrations were, respectively, 75.6 µg/l and 0.76 mg/g creatinine for a 50 ppm toluene personal exposure. The corresponding urinary toluene concentration before the next shift was 11 µg/l (r = 0.883).
Conclusion: Urinary toluene was once again shown to be an attractive surrogate for o-cresol and could be recommended as a biomarker of choice for solvent exposure. [Authors]
Abstract:
Solid-phase microextraction (SPME) has been widely used for many years in various applications, such as environmental and water samples, food and fragrance analysis, or biological fluids. The aim of this study was to propose the SPME method as an alternative to conventional techniques used in the evaluation of worker exposure to benzene, toluene, ethylbenzene, and xylene (BTEX). Polydimethylsiloxane-Carboxen (PDMS/CAR) proved the most effective stationary phase material for sorbing BTEX among the materials tested (polyacrylate, PDMS, PDMS/divinylbenzene, Carbowax/divinylbenzene). Various experimental conditions were studied to apply SPME to BTEX quantitation in field situations. The uptake rate of the selected fiber (75 µm PDMS/CAR) was determined for each analyte at various concentrations, relative humidities, and airflow velocities from static (calm air) to dynamic (> 200 cm/s) conditions. The SPME method was also compared with National Institute for Occupational Safety and Health method 1501. Unlike the latter, the SPME approach fulfills the new requirement for the threshold limit value-short-term exposure limit (TLV-STEL) of 2.5 ppm (8 mg/m³) for benzene.
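The determined uptake rates feed into the standard diffusive-sampling relation C = m / (U · t). A minimal sketch, in which all example values are hypothetical except the 8 mg/m³ ≈ 2.5 ppm benzene TLV-STEL cited above:

```python
def twa_concentration(mass_ng, uptake_rate_ml_per_min, minutes):
    """Time-weighted-average air concentration from the mass sorbed on the
    fiber: C = m / (U * t). With mass in ng, uptake rate in mL/min and time
    in min, the result is in ng/mL, which is numerically equal to mg/m^3."""
    return mass_ng / (uptake_rate_ml_per_min * minutes)

def mg_m3_to_ppm(c_mg_m3, molecular_weight, molar_volume_l=24.45):
    """Convert mg/m^3 to ppm at 25 degC and 1 atm (molar volume 24.45 L/mol)."""
    return c_mg_m3 * molar_volume_l / molecular_weight

# Sanity check against the TLV-STEL cited in the abstract:
# 8 mg/m^3 of benzene (M = 78.11 g/mol) is about 2.5 ppm.
benzene_ppm = mg_m3_to_ppm(8.0, 78.11)

# Hypothetical 15-min STEL sample with an assumed uptake rate of 1.0 mL/min:
c = twa_concentration(120.0, 1.0, 15.0)  # 8.0 ng/mL, i.e. 8.0 mg/m^3
```

The 1.0 mL/min uptake rate and 120 ng sorbed mass are illustrative only; in practice the fiber- and analyte-specific uptake rates determined in the study would be used.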
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as in 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
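The per-sample regression step described above can be sketched as follows (a simplified illustration on synthetic, baseline-free data; the article's algorithm additionally reconstructs the baseline and selects the log-linear window, which is assumed already done here):

```python
import numpy as np

def pcr_efficiency(fluorescence, cycles):
    """Estimate the per-sample PCR efficiency E by fitting a line to
    log10(fluorescence) over the cycles of the log-linear phase:
    F_c = F_0 * E**c  =>  log10(F_c) = log10(F_0) + c * log10(E)."""
    slope, intercept = np.polyfit(cycles, np.log10(fluorescence), 1)
    efficiency = 10.0 ** slope      # E, ideally close to 2.0 per cycle
    f0 = 10.0 ** intercept          # extrapolated starting fluorescence
    return efficiency, f0

# Synthetic log-linear-phase data with true E = 1.9 and F_0 = 1e-6.
cycles = np.arange(15, 22)
f = 1e-6 * 1.9 ** cycles
eff, f0 = pcr_efficiency(f, cycles)
```

Because the starting concentration scales as E raised to the quantification cycle, even a small baseline-induced error in E is amplified exponentially, which is the propagation effect the article quantifies.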
Abstract:
The large spatial inhomogeneity in the transmit B1 field (B1+) observable in human MR images at high static magnetic fields (B0) severely impairs image quality. To overcome this effect in brain T1-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times (MP2RAGE). By combining the two images in a novel fashion, it was possible to create T1-weighted images free of proton density contrast, T2* contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize the contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B1+ variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T1-weighted images, acquired within 12 min, high-resolution 3D T1 maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). T1 maps were validated in phantom experiments. In humans, the T1 values obtained at 7 T were 1.15 ± 0.06 s for white matter (WM) and 1.92 ± 0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T1 values obtained (0.81 ± 0.03 s for WM and 1.35 ± 0.05 s for GM) were once again in very good agreement with values in the literature. (C) 2009 Elsevier Inc. All rights reserved.
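The "novel fashion" of combining the two inversion-time images is commonly written as the ratio below (a sketch of the combination usually attributed to MP2RAGE; the input data here are random complex numbers and serve only to illustrate the bounded output range):

```python
import numpy as np

def mp2rage_combine(gre_ti1, gre_ti2):
    """Combine the two complex GRE images acquired at the two inversion times
    into a uniform T1-weighted image. The ratio cancels proton density, T2*,
    and receive-field contributions, and its values lie in [-0.5, 0.5]."""
    numerator = (gre_ti1 * np.conj(gre_ti2)).real
    denominator = np.abs(gre_ti1) ** 2 + np.abs(gre_ti2) ** 2
    return numerator / denominator

# Random complex "images" standing in for the two inversion-time acquisitions.
rng = np.random.default_rng(1)
s1 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
s2 = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
uni = mp2rage_combine(s1, s2)
```

The bound follows from |Re(s1·s2*)| ≤ |s1||s2| ≤ (|s1|² + |s2|²)/2, which is why the combined image is free of the multiplicative intensity factors common to both acquisitions.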
Abstract:
An active learning method is proposed for the semi-automatic selection of training sets in remote sensing image classification. At each iteration, the method adds to the current training set the unlabeled pixels for which the predictions of an ensemble of classifiers, based on bagged training sets, show maximum entropy. In this way, the algorithm selects the pixels that are the most uncertain and that will improve the model most if added to the training set. The user is asked to label these pixels at each iteration. Experiments using support vector machines (SVM) on an 8-class QuickBird image show the excellent performance of the method, which equals the accuracy of both a model trained with ten times more pixels and a model whose training set has been built using a state-of-the-art SVM-specific active learning method.
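The maximum-entropy selection step can be sketched as follows (a minimal illustration with hypothetical names; the real method retrains SVMs on bagged training sets at each iteration, which is omitted here):

```python
import numpy as np

def query_by_vote_entropy(ensemble_preds, n_queries):
    """ensemble_preds: integer array of shape (n_models, n_pixels) holding the
    class predicted for each unlabeled pixel by each classifier trained on a
    bagged training set. Returns the indices of the n_queries pixels whose
    vote distribution has maximum entropy, i.e. the most disagreement."""
    n_models, n_pixels = ensemble_preds.shape
    n_classes = int(ensemble_preds.max()) + 1
    entropy = np.zeros(n_pixels)
    for c in range(n_classes):
        p = (ensemble_preds == c).mean(axis=0)   # vote share for class c
        mask = p > 0
        entropy[mask] -= p[mask] * np.log(p[mask])
    return np.argsort(entropy)[::-1][:n_queries]

# Three bagged classifiers, three candidate pixels: the first two columns get
# unanimous votes (entropy 0), the third gets three different labels.
preds = np.array([[0, 1, 2],
                  [0, 1, 0],
                  [0, 1, 1]])
picked = query_by_vote_entropy(preds, 1)  # selects pixel index 2
```

The selected pixels are then shown to the user for labeling, added to the training set, and the ensemble is retrained before the next query round.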
Abstract:
The sensitivities of spleen and lymph node cultures for the diagnosis of canine visceral leishmaniasis were compared in 64 anti-Leishmania antibody-positive dogs from an endemic area in Brazil. The sensitivity of spleen cultures for Leishmania detection was 97.9%; in lymph node cultures it was 25%. Positive spleen culture was more frequent (p = 0.048, Fisher's exact probability test) in symptomatic (28 out of 33 animals) than in asymptomatic animals (19 out of 31 animals). These results support the use of spleen rather than lymph node aspiration as the method of choice for the parasitological diagnosis of the infection.
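The group comparison above can be reproduced approximately with a one-sided Fisher exact test built from the hypergeometric distribution (a stdlib sketch; depending on the sidedness convention used, it need not match the reported p = 0.048 exactly):

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]: the
    probability of observing a or more positives in the first group, from the
    hypergeometric distribution with the table margins held fixed."""
    row1, col1, total = a + b, a + c, a + b + c + d
    denom = comb(total, row1)
    return sum(comb(col1, x) * comb(total - col1, row1 - x)
               for x in range(a, min(row1, col1) + 1)) / denom

# Spleen-culture positivity from the study:
# symptomatic 28/33 vs asymptomatic 19/31 dogs.
p = fisher_one_sided(28, 5, 19, 12)
```

With the study's counts, the one-sided tail probability comes out below the conventional 5% level, consistent with the reported significance.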
Abstract:
The application of DNA-based markers to the task of discriminating among alternate salmon runs has evolved in accordance with ongoing genomic developments and has increasingly enabled resolution of which genetic markers associate with important life-history differences. Accurate and efficient identification of the most likely origin of salmon encountered during ocean fisheries, or at salvage from freshwater diversion and monitoring facilities, has far-reaching consequences for improving measures for management, restoration and conservation. Near-real-time provision of high-resolution identity information enables prompt response to changes in encounter rates. We thus continue to develop new tools to provide the greatest statistical power for run identification. As a proof of concept for genetic identification improvements, we conducted simulation and blind tests on 623 known-origin Chinook salmon (Oncorhynchus tshawytscha) to compare and contrast the accuracy of different population sampling baselines and microsatellite loci panels. This test included 35 microsatellite loci (1266 alleles), some known to be associated with specific coding regions of functional significance, such as the circadian rhythm cryptochrome genes, and others not known to be associated with any functional importance. Identification of the fall run with unprecedented accuracy was demonstrated. Overall, the top-performing panel and baseline (HMSC21) were predicted to have a success rate of 98%, but the blind-test success rate was 84%. Findings regarding bias are discussed to target primary areas for further research and resolution.
Deregulated MHC class II transactivator expression leads to a strong Th2 bias in CD4+ T lymphocytes.
Abstract:
The MHC class II (MHC-II) transactivator (CIITA) is the master transcriptional regulator of genes involved in MHC-II-restricted Ag presentation. Fine-tuning of CIITA gene expression determines the cell type-specific expression of MHC-II genes. This regulation is achieved by the selective usage of multiple CIITA promoters. It has recently been suggested that CIITA also contributes to Th cell differentiation by suppressing IL-4 expression in Th1 cells. In this study, we show that endogenous CIITA is expressed at low levels in activated mouse T cells. Importantly, CIITA is not differentially regulated in murine and human Th1 and Th2 cells. Ectopic expression of a CIITA transgene in multiple mouse cell types, including T cells, does not interfere with normal development of CD4(+) T cells. However, upon TCR activation the CIITA-transgenic CD4(+) T cells preferentially differentiate into IL-4-secreting Th2-type cells. These results imply that CIITA is not a direct Th1-specific repressor of the IL-4 gene and that tight control over the expression of CIITA and MHC-II is required to maintain the normal balance between Th1 and Th2 responses.
Abstract:
Summary: Forensic science, both as a source of and as a remedy for error potentially leading to judicial error, has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. The literature review covers, among other areas: the quality of forensic science work in general, the complex interaction between science and law, and specific propositions as to error sources not directly related to the interaction between law and science. A list of potential error sources all the way from the crime scene to the writing of the report has also been established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, due to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students tend to show that the decision-making stages are the most vulnerable to stimuli inducing observational biases. In the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example on potential and existing error sources in their work, on the limitations of what can be done with forensic science, and on the possibilities and tools to minimise errors. Training and education to improve the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors in forensic science. In addition, the length of time for which samples of physical evidence are kept was determined. Results tend to show considerable agreement on most subjects among the international participants.
Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, disagree, however, with opinions widely represented in the existing literature. Through the present research it was therefore possible to obtain a better view of the interaction between forensic science and judicial error, and to propose practical recommendations to minimise their occurrence.