909 results for excursion tests
Abstract:
BACKGROUND: Congestive heart failure (CHF) is a major public health problem. The use of B-type natriuretic peptide (BNP) tests shows promising diagnostic accuracy. Herein, we summarize the evidence on the accuracy of BNP tests in the diagnosis of CHF and compare the performance of rapid enzyme-linked immunosorbent assay (ELISA) and standard radioimmunosorbent assay (RIA) tests. METHODS: We searched electronic databases and the reference lists of included studies, and we contacted experts. Data were extracted on the study population, the type of test used, and methods. Receiver operating characteristic (ROC) plots and summary ROC curves were produced and negative likelihood ratios pooled. Random-effect meta-analysis and metaregression were used to combine data and explore sources of between-study heterogeneity. RESULTS: Nineteen studies describing 22 patient populations (9 ELISA and 13 RIA) and 9093 patients were included. The diagnosis of CHF was verified by echocardiography, radionuclide scan, or echocardiography combined with clinical criteria. The pooled negative likelihood ratio overall from random-effect meta-analysis was 0.18 (95% confidence interval [CI], 0.13-0.23). It was lower for the ELISA test (0.12; 95% CI, 0.09-0.16) than for the RIA test (0.23; 95% CI, 0.16-0.32). For a pretest probability of 20%, which is typical for patients with suspected CHF in primary care, a negative result of the ELISA test would produce a posttest probability of 2.9%; a negative RIA test, a posttest probability of 5.4%. CONCLUSIONS: The use of BNP tests to rule out CHF in primary care settings could reduce demand for echocardiography. The advantages of rapid ELISA tests need to be balanced against their higher cost.
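The posttest probabilities quoted above follow from the odds form of Bayes' theorem: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back. A minimal sketch (the function name is ours; the 20% pretest probability and the pooled negative likelihood ratios are the abstract's figures):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Convert a pretest probability to a posttest probability via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Abstract's figures: 20% pretest probability in primary care,
# negative LR 0.12 (ELISA) and 0.23 (RIA)
print(round(posttest_probability(0.20, 0.12) * 100, 1))  # ELISA: 2.9
print(round(posttest_probability(0.20, 0.23) * 100, 1))  # RIA: 5.4
```

This reproduces the 2.9% and 5.4% posttest probabilities reported in the abstract.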
Abstract:
Avidity tests can be used to discriminate between cattle that are acutely and chronically infected with the intracellular parasite Neospora caninum. The aim of this study was to compare the IgG avidity ELISA tests used in four European laboratories. A coded panel of 200 bovine sera from well-documented naturally and experimentally N. caninum-infected animals was analysed at the participating laboratories by their respective assay systems and laboratory protocols. Comparing the numeric test results, the concordance correlation coefficients were between 0.479 and 0.776. The laboratories categorize the avidity results into the classes "low" and "high", which are considered indicative of recent and chronic infection, respectively. Three laboratories also use an "intermediate" class. When the categorized data were analysed by Kappa statistics, there was moderate to substantial agreement between the laboratories. Agreement was better overall for dichotomized results than when an intermediate class was also used. Taken together, this first ring test for N. caninum IgG avidity assays showed moderate agreement between the assays used by the different laboratories to estimate IgG avidity. Our experience suggests that avidity tests are sometimes less robust than conventional ELISAs. Therefore, it is essential that they are carefully standardised and their performance continuously evaluated.
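The kind of agreement analysis described above can be illustrated with Cohen's kappa on a two-category (low/high avidity) agreement table. A minimal sketch; the counts below are invented for illustration and are not data from the study:

```python
def cohens_kappa(confusion):
    """Cohen's kappa for two raters' categorical calls (square count matrix)."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / total
    row_marg = [sum(row) / total for row in confusion]
    col_marg = [sum(confusion[i][j] for i in range(k)) / total
                for j in range(k)]
    # Chance agreement from the product of the marginal proportions
    expected = sum(r * c for r, c in zip(row_marg, col_marg))
    return (observed - expected) / (1 - expected)

# Hypothetical low/high avidity calls from two laboratories
counts = [[80, 15],   # lab A "low":  lab B low / high
          [10, 95]]   # lab A "high": lab B low / high
print(round(cohens_kappa(counts), 2))
```

With these hypothetical counts kappa falls in the "substantial agreement" band, the range the abstract reports for dichotomized results.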
Abstract:
There are numerous statistical methods for quantitative trait linkage analysis in human studies. An ideal such method would have high power to detect genetic loci contributing to the trait, would be robust to non-normality in the phenotype distribution, would be appropriate for general pedigrees, would allow the incorporation of environmental covariates, and would be appropriate in the presence of selective sampling. We recently described a general framework for quantitative trait linkage analysis, based on generalized estimating equations, of which many current methods are special cases. This procedure is appropriate for general pedigrees and easily accommodates environmental covariates. In this paper, we use computer simulations to investigate the power and robustness of a variety of linkage test statistics built upon our general framework. We also propose two novel test statistics that take account of higher moments of the phenotype distribution, in order to accommodate non-normality. These new linkage tests are shown to have high power and to be robust to non-normality. While we have not yet examined the performance of our procedures in the context of selective sampling via computer simulations, the proposed tests satisfy all of the other qualities of an ideal quantitative trait linkage analysis method.
Abstract:
High-throughput SNP arrays provide estimates of genotypes for up to one million loci, often used in genome-wide association studies. While these estimates are typically very accurate, genotyping errors do occur, and they can influence in particular the most extreme test statistics and p-values. Estimates of the genotype uncertainties are also available, although typically ignored. In this manuscript, we develop a framework to incorporate these genotype uncertainties in case-control studies for any genetic model. We verify that the assumption of a "local alternative" in the score test is very reasonable for effect sizes typically seen in SNP association studies, and show that the power of the score test is simply a function of the correlation of the genotype probabilities with the true genotypes. We demonstrate that the power to detect a true association can be substantially increased for difficult-to-call genotypes, resulting in improved inference in association studies.
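The key quantity above, the correlation of the genotype probabilities with the true genotypes, can be computed directly from posterior genotype dosages. A minimal sketch with invented posterior probabilities; the data, and the reading of r**2 as an effective-information fraction under the local alternative, are illustrative assumptions rather than the paper's code:

```python
import math

def dosage(probs):
    """Expected genotype (dosage) from posterior probabilities P(g = 0, 1, 2)."""
    return sum(g * p for g, p in enumerate(probs))

def pearson_r(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical hard-to-call genotypes: posterior probabilities vs. truth
posteriors = [(0.9, 0.1, 0.0), (0.2, 0.7, 0.1), (0.0, 0.3, 0.7),
              (0.8, 0.2, 0.0), (0.1, 0.8, 0.1), (0.0, 0.1, 0.9)]
truth = [0, 1, 2, 0, 1, 2]
dosages = [dosage(p) for p in posteriors]
r = pearson_r(dosages, truth)
# Under a local alternative, the score test's noncentrality scales with r**2
print(round(r, 3), round(r ** 2, 3))
```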
Abstract:
It is of interest in some applications to determine whether there is a relationship between a hazard rate function (or a cumulative incidence function) and a mark variable which is only observed at uncensored failure times. We develop nonparametric tests for this problem when the mark variable is continuous. Tests are developed for the null hypothesis that the mark-specific hazard rate is independent of the mark versus ordered and two-sided alternatives expressed in terms of mark-specific hazard functions and mark-specific cumulative incidence functions. The test statistics are based on functionals of a bivariate test process equal to a weighted average of differences between a Nelson-Aalen-type estimator of the mark-specific cumulative hazard function and a nonparametric estimator of this function under the null hypothesis. The weight function in the test process can be chosen so that the test statistics are asymptotically distribution-free. Asymptotically correct critical values are obtained through a simple simulation procedure. The testing procedures are shown to perform well in numerical studies, and are illustrated with an AIDS clinical trial example. Specifically, the tests are used to assess if the instantaneous or absolute risk of treatment failure depends on the amount of accumulation of drug resistance mutations in a subject's HIV virus. This assessment helps guide development of anti-HIV therapies that surmount the problem of drug resistance.
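The test process above is built on a Nelson-Aalen-type estimator. As background, a minimal sketch of the plain (non-mark-specific) Nelson-Aalen cumulative hazard estimator, with invented right-censored failure data; the function and data are ours for illustration, not the paper's mark-specific construction:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimator of the cumulative hazard H(t).

    times:  observed times (failure or censoring)
    events: 1 for an observed failure, 0 for censoring
    Returns (time, H(t)) at each distinct failure time.
    """
    pairs = sorted(zip(times, events))
    n = len(pairs)
    H, out, i = 0.0, [], 0
    while i < n:
        t = pairs[i][0]
        at_risk = n - i                               # subjects with time >= t
        deaths = sum(e for (u, e) in pairs if u == t)  # failures tied at t
        i += sum(1 for (u, _) in pairs if u == t)      # skip past ties
        if deaths:
            H += deaths / at_risk
            out.append((t, H))
    return out

# Hypothetical right-censored failure data
times  = [2, 3, 3, 5, 7, 8]
events = [1, 1, 0, 1, 0, 1]
for t, H in nelson_aalen(times, events):
    print(t, round(H, 3))
```

The mark-specific version in the paper additionally smooths over the continuous mark; this sketch only shows the estimator's backbone.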
Abstract:
This paper is the fourth in a series of reviews that will summarize available data and critically discuss the potential role of lung-function testing in infants with acute neonatal respiratory disorders and chronic lung disease of infancy. The current paper addresses information derived from tidal breathing measurements within the framework outlined in the introductory paper of this series, with particular reference to how these measurements inform on control of breathing. Infants with acute and chronic respiratory illness demonstrate differences in tidal breathing and its control that are of clinical consequence and can be measured objectively. The increased incidence of significant apnea in preterm infants and infants with chronic lung disease, together with the reportedly increased risk of sudden unexplained death within the latter group, suggests that control of breathing is affected by both maturation and disease. Clinical observations are supported by formal comparison of tidal breathing parameters and control of breathing indices in the research setting.
Abstract:
Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely because of its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) using TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by a factor of (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
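The proposed rule and the Bonferroni correction it is compared against are easy to put side by side. A minimal sketch; the function names are ours, while the alpha/(k - 1) scaling and the count of k(k - 1)/2 pairwise comparisons come from the abstract:

```python
from itertools import combinations

def adjusted_alpha_tost(k, alpha=0.05):
    """Scaled nominal level for all-pairwise TOST equivalence tests,
    per the manuscript's rule: divide alpha by (k - 1)."""
    return alpha / (k - 1)

def adjusted_alpha_bonferroni(k, alpha=0.05):
    """Classical Bonferroni over all k * (k - 1) / 2 pairwise comparisons."""
    m = k * (k - 1) // 2
    return alpha / m

k = 5  # five independent groups, all pairwise equivalence comparisons
print(len(list(combinations(range(k), 2))))   # 10 comparisons
print(round(adjusted_alpha_tost(k), 4))        # alpha / 4
print(round(adjusted_alpha_bonferroni(k), 4))  # alpha / 10
```

For k = 5 the rule tests each pair at 0.0125 rather than Bonferroni's 0.005, illustrating why it is much less conservative.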
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate an improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for the heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease, the sensitivities, and/or the specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measurements in this scenario.
Abstract:
OBJECTIVE: To consider the reasons and context for test ordering by doctors when faced with an undiagnosed complaint in primary or secondary care. STUDY DESIGN AND SETTING: We reviewed any study of any design that discussed factors that may affect a doctor's decision to order a test. Articles were located through searches of electronic databases, authors' files on diagnostic methodology, and reference lists of relevant studies. We extracted data on: study design, type of analysis, setting, topic area, and any factors reported to influence test ordering. RESULTS: We included 37 studies. We carried out a thematic analysis to synthesize data. Five key groupings arose from this process: diagnostic factors, therapeutic and prognostic factors, patient-related factors, doctor-related factors, and policy and organization-related factors. To illustrate how the various factors identified may influence test ordering we considered the symptom low back pain and the diagnosis multiple sclerosis as examples. CONCLUSIONS: A wide variety of factors influence a doctor's decision to order a test. These are integral to understanding diagnosis in clinical practice. Traditional diagnostic accuracy studies should be supplemented with research into the broader context in which doctors perform their work.