887 results for SAMPLE ERROR
Abstract:
In this study, Ascaris DNA was extracted and sequenced from a medieval archaeological sample in Korea. While the Ascaris eggs were confirmed to be of human origin by archaeological evidence, it was not possible to pinpoint the exact species due to the close genetic relationships among Ascaris species. Despite this shortcoming, this is the first Ascaris ancient DNA (aDNA) report from a medieval Asian country and thus will expand the scope of Ascaris aDNA research.
Abstract:
Precision of released figures is not only an important quality feature of official statistics, it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
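Aitchison's approach replaces raw compositional parts with logratios. A minimal sketch of the centered log-ratio (clr) transform, using purely hypothetical wage components (the amounts and labels are illustrative, not taken from the survey):

```python
import math

def clr(components):
    """Centered log-ratio (clr) transform of a positive composition.

    Each part is divided by the geometric mean of all parts before
    taking logs, so the transformed coordinates sum to zero.
    """
    g = math.exp(sum(math.log(x) for x in components) / len(components))
    return [math.log(x / g) for x in components]

# Hypothetical breakdown of a total salary into three wage components
wage = [4200.0, 600.0, 200.0]   # e.g. base pay, bonus, allowances
z = clr(wage)
print([round(v, 4) for v in z])
print(round(sum(z), 10))  # clr coordinates sum to 0 (up to rounding)
```

Working in clr (or other logratio) coordinates avoids the spurious correlations that arise when components are constrained to sum to a total.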
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one, and under some constraints, it has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown
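The complexity analysis rests on the standard Monte Carlo fact that an unbiased estimator's mean square error equals its variance and shrinks as 1/N. A toy sketch illustrating this (a uniform integrand, not the paper's radiosity estimators):

```python
import random

def mc_mean_estimate(f, sampler, n):
    """Unbiased Monte Carlo estimator: average of f over n random samples."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# Toy integrand: estimate E[U^2] for U ~ Uniform(0,1); true value is 1/3.
true_value = 1.0 / 3.0

def empirical_mse(n, runs=2000):
    """Average squared error over many independent runs of the estimator."""
    errs = [(mc_mean_estimate(lambda u: u * u, random.random, n) - true_value) ** 2
            for _ in range(runs)]
    return sum(errs) / runs

mse_small, mse_large = empirical_mse(50), empirical_mse(200)
print(mse_small, mse_large)
# For an unbiased estimator MSE equals variance, which scales as 1/n,
# so quadrupling n should cut the empirical MSE roughly by four.
```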
Abstract:
Introduction: Ethylglucuronide (EtG) is a direct and specific metabolite of ethanol. Its determination in hair is of increasing interest for detecting and monitoring alcohol abuse. The quantification of EtG in hair requires analytical methods with the highest sensitivity and specificity. We present a fully validated method based on gas chromatography-negative chemical ionization tandem mass spectrometry (GC-NCI-MS/MS). The method was validated using French Society of Pharmaceutical Sciences and Techniques (SFSTP) guidelines, which are based on the determination of the total measurement error and accuracy profiles. Methods: Washed and powdered hair is extracted in water using an ultrasonic incubation. After purification by Oasis MAX solid phase extraction, the derivatized EtG is detected and quantified by GC-NCI-MS/MS in the selected reaction monitoring mode. The transitions m/z 347 → 163 and m/z 347 → 119 were used for the quantification and identification of EtG. Four quality controls (QC) prepared with hair samples taken post mortem from 2 subjects with a known history of alcoholism were used. A proficiency test with 7 participating laboratories was first run to validate the EtG concentration of each QC sample. Considering the results of this test, these samples were then used as internal controls for validation of the method. Results: The mean EtG concentrations measured in the 4 QC were 259.4, 130.4, 40.8, and 8.4 pg/mg hair. Method validation showed linearity between 8.4 and 259.4 pg/mg hair (r2 > 0.999). The lower limit of quantification was set at 8.4 pg/mg. Repeatability and intermediate precision were found to be less than 13.2% for all concentrations tested. Conclusion: The method proved to be suitable for routine analysis of EtG in hair. The GC-NCI-MS/MS method was then successfully applied to the analysis of EtG in hair samples collected from different alcohol consumers.
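The reported linearity (r² > 0.999) comes from an ordinary least-squares calibration fit. A minimal sketch, using the four QC concentrations from the abstract but purely hypothetical detector responses:

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a + b*x and its coefficient of
    determination r^2 (fraction of variance in y explained by the fit)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# QC concentrations from the abstract (pg/mg hair); the responses are
# illustrative values, not the study's measurements.
conc = [8.4, 40.8, 130.4, 259.4]
resp = [0.84, 4.11, 13.0, 25.9]
a, b, r2 = linear_fit_r2(conc, resp)
print(round(r2, 5))
```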
Abstract:
Paracoccidioidomycosis is diagnosed from the direct observation of the causative agent, but serology can facilitate and decrease the time required for diagnosis. The objective of this study was to determine the influence of serum sample inactivation on the performance of the latex agglutination test (LAT) for detecting antibodies against Paracoccidioides brasiliensis. The sensitivity of the LAT from inactivated or non-inactivated samples was 73% and 83%, respectively, and the LAT selectivity was 79% and 90%, respectively. The LAT evaluated here was no more specific than the double-immunodiffusion assay. We suggest the investigation of other methods for improving the LAT, such as the use of deglycosylated antigen.
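Sensitivity and specificity follow directly from a 2x2 confusion table. A short sketch, with hypothetical counts chosen to reproduce the non-inactivated figures (83% and 90%); the actual study counts are not given in the abstract:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of true positives detected.
    Specificity = TN/(TN+FP): fraction of true negatives correctly excluded."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts (per 100 diseased and 100 healthy subjects)
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=90, fp=10)
print(sens, spec)
```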
Abstract:
The efficiency of the Mosquito Magnet Liberty PlusTM (MMLP) trap was evaluated in comparison to human-landing catches (HLCs) to sample anopheline populations in Jabillal, state of Bolivar, southern Venezuela. The village comprised 37 houses and a population of 101; malaria in this village is primarily due to Plasmodium vivax and the Annual Parasite Index is 316.8 per 1,000 population. A longitudinal study was conducted between June 2008 and January 2009 for three nights per month every two months between 17:30-21:30, the period when biting mosquitoes are most active. Anopheles darlingi and Anopheles nuneztovari were the most common species collected by both methods, whereas Anopheles marajoara was more abundant according to the HLC method. The MMLP trap was more efficient for collecting An. nuneztovari [63%, confidence interval (CI): 2.53] than for collecting An. darlingi (31%, CI: 1.57). There were significant correlations (p < 0.01) between the two methods for An. darlingi [Pearson correlation (R²) = 0.65] and An. nuneztovari (R² = 0.48). These preliminary results are encouraging for further investigations of the use of the MMLP trap for monitoring anopheline populations in remote malaria-endemic areas in the Amazon Basin.
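The agreement between the two collection methods is quantified with the Pearson product-moment correlation. A minimal stdlib implementation, run on hypothetical nightly counts (the study's raw counts are not given in the abstract):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical nightly counts of one species by the two methods
trap = [12, 30, 7, 22, 15, 40]   # MMLP trap
hlc  = [20, 55, 9, 35, 28, 70]   # human-landing catches
r = pearson_r(trap, hlc)
print(round(r, 3), round(r * r, 3))  # r and r^2
```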
Abstract:
Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ respectively during 2001 were incongruent, probably mostly due to rounding, transcription, or type-setting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more controlled and valued.
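The last-digit analysis is a chi-square goodness-of-fit test against a uniform distribution over the ten digits. A sketch with hypothetical counts for 610 statistics, exaggerating a preference for 0 and 5 (the digit counts are illustrative, not the paper's data):

```python
def chi_square_uniform(counts):
    """Chi-square goodness-of-fit statistic against a uniform distribution."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical last-digit counts (digits 0-9) for 610 test statistics;
# rounding preference inflates the cells for 0 and 5.
counts = [95, 52, 55, 58, 54, 92, 53, 51, 50, 50]
stat = chi_square_uniform(counts)
print(round(stat, 2))
# With 10 digit classes there are 9 degrees of freedom; the 5% critical
# value is about 16.92, so a statistic this large rejects uniformity.
```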
Abstract:
Stool is chemically complex and the extraction of DNA from stool samples is extremely difficult. Haemoglobin breakdown products, such as bilirubin, bile acids and mineral ions, that are present in the stool samples, can inhibit DNA amplification and cause molecular assays to produce false-negative results. Therefore, stool storage conditions are highly important for the diagnosis of intestinal parasites and other microorganisms through molecular approaches. In the current study, stool samples that were positive for Giardia intestinalis were collected from five different patients. Each sample was stored using one out of six different storage conditions [room temperature (RT), +4°C, -20°C, 70% alcohol, 10% formaldehyde or 2.5% potassium dichromate] for DNA extraction procedures at one, two, three and four weeks. A modified QIAamp Stool Mini Kit procedure was used to isolate the DNA from stored samples. After DNA isolation, polymerase chain reaction (PCR) amplification was performed using primers that target the β-giardin gene. A G. intestinalis-specific 384 bp band was obtained from all of the cyst-containing stool samples that were stored at RT, +4°C and -20°C and in 70% alcohol and 2.5% potassium dichromate; however, this band was not produced by samples that had been stored in 10% formaldehyde. Moreover, for the stool samples containing trophozoites, the same G. intestinalis-specific band was only obtained from the samples that were stored in 2.5% potassium dichromate for up to one month. As a result, it appears evident that the most suitable storage condition for stool samples to permit the isolation of G. intestinalis DNA is in 2.5% potassium dichromate; under these conditions, stool samples may be stored for one month.
Abstract:
BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) Systems. METHODS A novel combination of feature extraction techniques to improve the diagnosis of AD is proposed. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small-sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the latter two also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) linear transformation of the PLS or PCA reduced data, ii) feature reduction technique, and iii) classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem.
One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also (in combination with NMSE and PLS) makes this rate more stable. A further advance is the generalisation ability of the methods, demonstrated by experiments on two image modalities (SPECT and PET).
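The reported metrics come from k-fold cross-validation, which partitions the data so every sample is tested exactly once. A minimal index-splitting sketch (the fold assignment scheme is illustrative; the paper does not specify k or the splitting strategy):

```python
def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation.

    Samples are dealt round-robin into k folds; each fold serves once
    as the test set while the remaining folds form the training set.
    """
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

# Toy check on 10 samples with 5 folds: every sample appears in exactly
# one test fold, and train/test never overlap.
splits = list(k_fold_indices(10, 5))
all_test = sorted(i for _, test in splits for i in test)
print(len(splits), all_test)
```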
Abstract:
Raman spectroscopy has become an attractive tool for the analysis of pharmaceutical solid dosage forms. In the present study it is used to ensure the identity of tablets. The two main applications of this method are release of final products in quality control and detection of counterfeits. Twenty-five product families of tablets have been included in the spectral library and a non-linear classification method, Support Vector Machines (SVMs), has been employed. Two calibrations have been developed in cascade: the first one identifies the product family while the second one specifies the formulation. A product family comprises different formulations that have the same active pharmaceutical ingredient (API) but in a different amount. Once the tablets have been classified by the SVM model, API peak detection and correlation are applied to make the identification specific and, in the future, to allow counterfeits to be discriminated from genuine products. This calibration strategy enables the identification of 25 product families without error and in the absence of prior information about the sample. Raman spectroscopy coupled with chemometrics is therefore a fast and accurate tool for the identification of pharmaceutical tablets.
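The cascade structure (family first, then formulation within that family) can be sketched independently of the classifier used. The paper uses SVMs; this toy replaces them with nearest-spectrum matching, and all library names, spectra, and the query are hypothetical:

```python
def nearest_label(spectrum, library):
    """Return the library label whose reference spectrum is closest
    (smallest sum of squared differences) to the query spectrum."""
    return min(library, key=lambda lab: sum((a - b) ** 2
               for a, b in zip(spectrum, library[lab])))

# Hypothetical two-level spectral library: family -> formulation -> spectrum
library = {
    "familyA": {"A_100mg": [1.0, 0.2, 0.1], "A_200mg": [1.0, 0.4, 0.1]},
    "familyB": {"B_50mg":  [0.1, 0.9, 0.6]},
}
# Stage 1 uses one averaged reference spectrum per family
family_refs = {fam: [sum(col) / len(col) for col in zip(*forms.values())]
               for fam, forms in library.items()}

query = [1.0, 0.38, 0.12]
family = nearest_label(query, family_refs)           # first calibration
formulation = nearest_label(query, library[family])  # second calibration
print(family, formulation)
```

The design benefit of the cascade is that the second-stage model only has to separate formulations that share an API, a much easier problem than one flat 25-way classification.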
Abstract:
BACKGROUND Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. METHODS Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
DISCUSSION This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.
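Sample-size calculations for estimating a proportion typically use n = z²·p·(1-p)/e². A minimal sketch; note that the familiar n ≈ 384 arises from the most conservative p = 0.5 with a 5% margin, while the abstract's stated 20% rate and 3% margin give a larger n, so the authors' exact parameterization may differ from this sketch:

```python
def sample_size_proportion(p, margin, z=1.96):
    """Classic sample size for estimating a proportion with a given
    margin of error at ~95% confidence: n = z^2 * p * (1-p) / margin^2.
    Round up in practice."""
    return z ** 2 * p * (1 - p) / margin ** 2

n_conservative = sample_size_proportion(0.5, 0.05)  # ~384.2
n_stated = sample_size_proportion(0.2, 0.03)
print(round(n_conservative), round(n_stated))
```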
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In the EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model.
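The core idea of a two-part model is that the expected intake factorises into a consumption probability times a conditional amount. A minimal sketch with hypothetical 24-hour-recall values (simple empirical estimates stand in for the paper's regression models):

```python
# Two-part model for an episodically consumed food: part 1 models the
# probability of any consumption, part 2 the amount given consumption.
def expected_intake(p_consume, mean_amount_if_consumed):
    """E[intake] = P(intake > 0) * E[intake | intake > 0]."""
    return p_consume * mean_amount_if_consumed

# Hypothetical 24-hour recalls with excess zeroes (grams of a vegetable)
recalls = [0, 0, 150, 0, 80, 0, 0, 120, 0, 100]
n_consumed = sum(1 for r in recalls if r > 0)
p_hat = n_consumed / len(recalls)                     # part 1 estimate
amount_hat = sum(r for r in recalls if r > 0) / n_consumed  # part 2 estimate
print(p_hat, amount_hat, expected_intake(p_hat, amount_hat))
```

In the calibration model proper, both parts are regressions on covariates (part 1 logistic, part 2 linear on the positive amounts) rather than plain means.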
Abstract:
In the last few years, many researchers have studied the presence of common dimensions of temperament in subjects with symptoms of anxiety. The aim of this study is to examine the association between temperamental dimensions (high negative affect and activity level) and anxiety problems in clinical preschool children. A total of 38 children, ages 3 to 6 years, from the Infant and Adolescent Mental Health Center of Girona and the Center of Diagnosis and Early Attention of Sabadell and Olot were evaluated by parents and psychologists. Their parents completed several screening scales and, subsequently, clinical child psychopathology professionals carried out diagnostic interviews with children from the sample who presented signs of anxiety. Findings showed that children with high levels of negative affect and low activity level have pronounced symptoms of anxiety. However, children with anxiety disorders do not present temperament styles different from those of their peers without these pathologies.
Abstract:
Natural resistance-associated macrophage protein 1/solute carrier family 11 member 1 gene (Nramp1/Slc11a1) is a gene that controls the susceptibility of inbred mice to intracellular pathogens. Polymorphisms in the human Slc11a1/Nramp1 gene have been associated with host susceptibility to leprosy. This study evaluated nine polymorphisms of the Slc11a1/Nramp1 gene [(GT)n, 274C/T, 469+14G/C, 577-18G/A, 823C/T, 1029 C/T, 1465-85G/A, 1703G/A, and 1729+55del4] in 86 leprosy patients (67 and 19 patients had the multibacillary and the paucibacillary clinical forms of the disease, respectively) and 239 healthy controls matched by age, gender, and ethnicity. The frequency of allele 2 of the (GT)n polymorphism was higher in leprosy patients [p = 0.04, odds ratio (OR) = 1.49], whereas the frequency of allele 3 was higher in the control group (p = 0.03; OR = 0.66). Patients carrying the 274T allele (p = 0.04; OR = 1.49) and TT homozygosity (p = 0.02; OR = 2.46), as well as the 469+14C allele (p = 0.03; OR = 1.53), of the 274C/T and 469+14G/C polymorphisms, respectively, were more frequent in the leprosy group. The leprosy and control groups had similar frequencies of the 577-18G/A, 823C/T, 1029C/T, 1465-85G/A, 1703G/A, and 1729+55del4 polymorphisms. The 274C/T polymorphism in exon 3 and the 469+14G/C polymorphism in intron 4 were associated with susceptibility to leprosy, while alleles 2 and 3 of the (GT)n polymorphism in the promoter region were associated with susceptibility to and protection from leprosy, respectively.
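The reported ORs are odds ratios from 2x2 case-control tables. A minimal sketch with hypothetical carrier counts (the abstract reports only the ORs, not the underlying counts):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 case-control table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts: 43 of 86 patients vs 89 of 239 controls carry
# the allele of interest (illustrative, not the study's data).
or_hat = odds_ratio(a=43, b=43, c=89, d=150)
print(round(or_hat, 2))
# An OR above 1 indicates the allele is more common among cases,
# i.e. associated with susceptibility.
```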
Abstract:
Chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) experiments are widely used to determine, within entire genomes, the occupancy sites of any protein of interest, including, for example, transcription factors, RNA polymerases, or histones with or without various modifications. In addition to allowing the determination of occupancy sites within one cell type and under one condition, this method allows, in principle, the establishment and comparison of occupancy maps in various cell types, tissues, and conditions. Such comparisons require, however, that samples be normalized. Widely used normalization methods that include a quantile normalization step perform well when factor occupancy varies at a subset of sites, but may miss uniform genome-wide increases or decreases in site occupancy. We describe a spike adjustment procedure (SAP) that, unlike commonly used normalization methods, which intervene at the analysis stage, entails an experimental step prior to immunoprecipitation. A constant, low amount from a single batch of chromatin of a foreign genome is added to the experimental chromatin. This "spike" chromatin then serves as an internal control to which the experimental signals can be adjusted. We show that the method improves similarity between replicates and reveals biological differences, including global and largely uniform changes.
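Because every sample receives the same amount of spike chromatin, differences in recovered spike reads reflect technical variation, and signals can be rescaled accordingly. A minimal sketch of that adjustment step, with hypothetical read counts and per-region signals (not the paper's data or pipeline):

```python
def spike_adjusted(sample_signal, spike_reads, reference_spike_reads):
    """Rescale a sample's ChIP signal so that its spike-in chromatin
    would yield the same read count as in a chosen reference sample."""
    factor = reference_spike_reads / spike_reads
    return [s * factor for s in sample_signal], factor

# Hypothetical per-region signals from two conditions sharing one
# spike-in batch; condition B recovered twice as many spike reads as
# the reference, so its signal is scaled down by half before comparison.
signal_b = [40.0, 10.0, 8.0]
adjusted_b, factor = spike_adjusted(signal_b, spike_reads=2_000_000,
                                    reference_spike_reads=1_000_000)
print(factor, adjusted_b)
```

Unlike quantile normalization, this rescaling preserves a genuine genome-wide increase or decrease in occupancy, because the scaling factor is anchored to the constant spike rather than to the experimental signal distribution.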