929 results for likelihood ratio test


Relevance: 90.00%

Abstract:

BACKGROUND: Mechanical pain sensitivity is assessed in every patient with pain, either by palpation or by quantitative pressure algometry. Despite widespread use, no studies have formally addressed the usefulness of this practice for the identification of the source of pain. We tested the hypothesis that assessing mechanical pain sensitivity distinguishes damaged from healthy cervical zygapophysial (facet) joints. METHODS: Thirty-three patients with chronic unilateral neck pain were studied. Pressure pain thresholds (PPTs) were assessed bilaterally at all cervical zygapophysial joints. The diagnosis of zygapophysial joint pain was made by selective nerve blocks. Primary analysis was the comparison of the PPT between symptomatic and contralateral asymptomatic joints. The secondary end points were as follows: differences in PPT between affected and asymptomatic joints of the same side of patients with zygapophysial joint pain; differences in PPT at the painful side between patients with and without zygapophysial joint pain; and sensitivity and specificity of PPT for 2 different cutoffs (difference in PPT between affected and contralateral side by 1 and 30 kPa, meaning that the test was considered positive if the difference in PPT between painful and contralateral side was negative by at least 1 and 30 kPa, respectively). The PPT of patients was also compared with the PPT of 12 pain-free subjects. RESULTS: Zygapophysial joint pain was present in 14 patients. In these cases, the difference in mean PPT between affected and contralateral side (primary analysis) was −6.2 kPa (95% confidence interval: −19.5 to 7.2, P = 0.34). In addition, the secondary analyses yielded no statistically significant differences. For the cutoff of 1 kPa, sensitivity and specificity of PPT were 67% and 16%, respectively, resulting in a positive likelihood ratio of 0.79 and a diagnostic confidence of 38%. 
When the cutoff of 30 kPa was considered, the sensitivity decreased to only 13%, whereas the specificity increased to 95%, resulting in a positive likelihood ratio of 2.53 and a diagnostic confidence of 67%. The PPT was significantly lower in patients than in pain-free subjects (P < 0.001). CONCLUSIONS: Assessing mechanical pain sensitivity is not diagnostic for cervical zygapophysial joint pain. The finding should stimulate further research into a diagnostic tool that is widely used in the clinical examination of patients with pain.
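The likelihood ratios reported above follow directly from sensitivity and specificity. A minimal Python sketch, using the rounded percentages quoted in the abstract (the published LR+ of 2.53 was presumably computed from raw counts, so these rounded inputs reproduce it only approximately):

```python
# Positive/negative likelihood ratios from sensitivity and specificity.
# Inputs are the rounded percentages from the abstract, so results only
# approximate the published figures, which came from raw counts.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)   # P(T+|D+) / P(T+|D-)
    lr_neg = (1.0 - sensitivity) / specificity   # P(T-|D+) / P(T-|D-)
    return lr_pos, lr_neg

# 30 kPa cutoff: sensitivity 13%, specificity 95%
lr_pos, lr_neg = likelihood_ratios(0.13, 0.95)
print(round(lr_pos, 2))  # 2.6 with rounded inputs (2.53 reported)
```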

Relevance: 90.00%

Abstract:

Fluorescence microlymphography (FML) is used to visualize the lymphatic capillaries. A maximum spread of the fluorescence dye of ≥ 12 mm has been suggested for the diagnosis of lymphedema. However, data on sensitivity and specificity are lacking. The aim of this study was to investigate the accuracy of FML for diagnosing lymphedema in patients with leg swelling. Patients with lower extremity swelling were clinically assessed and separated into lymphedema and non-lymphatic edema groups. FML was studied in all affected legs and the maximum spread of lymphatic capillaries was measured. Test accuracy and receiver operating characteristic (ROC) analyses were performed to assess possible threshold values that predict lymphedema. Between March 2008 and August 2011, a total of 171 patients (184 legs) with a median age of 43.5 (IQR 24, 54) years were assessed. Of those, 94 (51.1%) legs were diagnosed with lymphedema. The sensitivity, specificity, positive and negative likelihood ratio, and positive and negative predictive value were 87%, 64%, 2.45, 0.20, 72% and 83% for the 12-mm cut-off level and 79%, 83%, 4.72, 0.26, 83% and 79% for the 14-mm cut-off level, respectively. The area under the ROC curve was 0.82 (95% CI: 0.76, 0.88). Sensitivity was higher in secondary than in primary lymphedema (95.0% vs 74.3%, p = 0.045). No major adverse events were observed. In conclusion, FML is a simple and safe technique for detecting lymphedema in patients with leg swelling. A cut-off level of ≥ 14 mm maximum spread has high sensitivity and high specificity for detecting lymphedema and should be chosen.

Relevance: 90.00%

Abstract:

OBJECTIVE: To determine the accuracy of magnetic resonance imaging criteria for the early diagnosis of multiple sclerosis in patients with suspected disease. DESIGN: Systematic review. DATA SOURCES: 12 electronic databases, citation searches, and reference lists of included studies. REVIEW METHODS: Studies of diagnostic accuracy that compared magnetic resonance imaging, or diagnostic criteria incorporating such imaging, with a reference standard for the diagnosis of multiple sclerosis. RESULTS: 29 studies (18 cohort studies, 11 other designs) were included. On average, studies of other designs (mainly diagnostic case-control studies) produced higher estimated diagnostic odds ratios than did cohort studies. Among 15 studies of higher methodological quality (cohort design, clinical follow-up as reference standard), those with longer follow-up produced higher estimates of specificity and lower estimates of sensitivity. Only two such studies followed patients for more than 10 years. Even in the presence of many lesions (> 10 or > 8), magnetic resonance imaging could not accurately rule multiple sclerosis in (likelihood ratio of a positive test result 3.0 and 2.0, respectively). Similarly, the absence of lesions was of limited utility in ruling out a diagnosis of multiple sclerosis (likelihood ratio of a negative test result 0.1 and 0.5). CONCLUSIONS: Many evaluations of the accuracy of magnetic resonance imaging for the early detection of multiple sclerosis have produced inflated estimates of test performance owing to methodological weaknesses. Use of magnetic resonance imaging to confirm multiple sclerosis on the basis of a single attack of neurological dysfunction may lead to over-diagnosis and over-treatment.

Relevance: 90.00%

Abstract:

We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
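The construction described above can be sketched as a quadratic form: under correct specification both estimators are consistent, so their scaled difference has a limiting chi-square distribution. A hedged illustration, with placeholder numbers standing in for the marginal-ML and conditional-ML fits (none of these values come from the paper):

```python
# Hausman-style chi-square test sketch: quadratic form of the difference
# between two estimates of the same fixed effects. All numbers here are
# hypothetical placeholders; in the paper's setting they would come from
# marginal-ML and conditional-ML fits of a generalised linear mixed model.
import numpy as np
from scipy import stats

beta_marginal = np.array([0.52, -1.10])        # hypothetical MML estimates
beta_conditional = np.array([0.48, -1.05])     # hypothetical CML estimates
var_diff = np.array([[0.010, 0.002],           # asymptotic Var(difference),
                     [0.002, 0.015]])          # assumed already derived

d = beta_marginal - beta_conditional
T = float(d @ np.linalg.solve(var_diff, d))    # d' V^{-1} d
p_value = stats.chi2.sf(T, df=len(d))          # limiting chi-square, df = dim(d)
print(T, p_value)                              # large p: no evidence of misspecification
```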

Relevance: 90.00%

Abstract:

OBJECTIVE: To review the accuracy of electrocardiography in screening for left ventricular hypertrophy in patients with hypertension. DESIGN: Systematic review of studies of test accuracy of six electrocardiographic indexes: the Sokolow-Lyon index, Cornell voltage index, Cornell product index, Gubner index, and Romhilt-Estes scores with thresholds for a positive test of ≥4 points or ≥5 points. DATA SOURCES: Electronic databases ((Pre-)Medline, Embase), reference lists of relevant studies and previous reviews, and experts. STUDY SELECTION: Two reviewers scrutinised abstracts and examined potentially eligible studies. Studies comparing the electrocardiographic index with echocardiography in hypertensive patients and reporting sufficient data were included. DATA EXTRACTION: Data on study populations, echocardiographic criteria, and methodological quality of studies were extracted. DATA SYNTHESIS: Negative likelihood ratios, which indicate to what extent the posterior odds of left ventricular hypertrophy are reduced by a negative test, were calculated. RESULTS: 21 studies and data on 5608 patients were analysed. The median prevalence of left ventricular hypertrophy was 33% (interquartile range 23-41%) in primary care settings (10 studies) and 65% (37-81%) in secondary care settings (11 studies). The median negative likelihood ratio was similar across electrocardiographic indexes, ranging from 0.85 (range 0.34-1.03) for the Romhilt-Estes score (with threshold ≥4 points) to 0.91 (0.70-1.01) for the Gubner index. Using the Romhilt-Estes score in primary care, a negative electrocardiogram result would reduce the typical pre-test probability from 33% to 31%. In secondary care the typical pre-test probability of 65% would be reduced to 63%. CONCLUSION: Electrocardiographic criteria should not be used to rule out left ventricular hypertrophy in patients with hypertension.
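The pre-test to post-test calculation in the results follows the odds form of Bayes' theorem. A short sketch using the abstract's rounded summary figures (the published post-test values were derived from study-level data, so rounded inputs need not match them exactly):

```python
# Post-test probability from pre-test probability and a likelihood ratio,
# via the odds form of Bayes' theorem. Inputs are rounded summary numbers
# from the abstract; the published post-test values need not match exactly.

def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Primary care: pre-test probability 33%, negative LR 0.85 (Romhilt-Estes)
p = post_test_probability(0.33, 0.85)
print(round(p, 2))   # 0.3: a negative ECG barely changes the probability
```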

Relevance: 90.00%

Abstract:

Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus, which are classified as bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much medical data exhibit nonnormal distributions. Measurements are done on several hundred cells per patient, so summary measurements reflecting the underlying distribution are needed.

Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells.

Data transformations such as log, square root, and 1/x did not yield normality as measured by the Shapiro-Wilk test. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality. Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape, and elongation) showed the strongest evidence of bimodality and were studied further.

Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture model analysis using a two-component model having a Gaussian distribution with equal variances. The mixture component parameters were used to bootstrap the log likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had 5 levels and was constructed of 3 components: extracapsular penetration (ECP), lymph node involvement (LN+), and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that there were changes in mean levels and proportions of the components in the lower severity levels.
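The abstract does not state which bimodality coefficient was used; a common formulation (Sarle's coefficient, as implemented in SAS, assumed here purely for illustration) combines skewness and kurtosis, with values above 5/9 taken to suggest bimodality:

```python
# Sarle's bimodality coefficient: one common formulation, assumed here
# since the study does not state which definition it used. Values above
# 5/9 (about 0.555) are conventionally taken to suggest bimodality.
import numpy as np
from scipy import stats

def bimodality_coefficient(x):
    n = len(x)
    g1 = stats.skew(x)                      # sample skewness
    g2 = stats.kurtosis(x)                  # excess kurtosis
    correction = 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return (g1 ** 2 + 1.0) / (g2 + correction)

rng = np.random.default_rng(0)
unimodal = rng.normal(0.0, 1.0, 300)
bimodal = np.concatenate([rng.normal(-2.5, 1.0, 150),
                          rng.normal(2.5, 1.0, 150)])
bc_unimodal = bimodality_coefficient(unimodal)
bc_bimodal = bimodality_coefficient(bimodal)
print(bc_unimodal, bc_bimodal)   # the two-peaked sample scores higher
```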

Relevance: 90.00%

Abstract:

I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
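mgof itself is a Stata command, but the power-divergence family it draws on is also exposed in SciPy, which makes the relationship between the statistics easy to see. A sketch with hypothetical counts against a uniform null (Pearson's X², the likelihood-ratio G², and the Cressie-Read statistic correspond to lambda values 1, 0, and 2/3):

```python
# The power-divergence family (Cressie and Read 1984) via SciPy. The
# counts are hypothetical; the three named lambda_ settings reproduce
# Pearson's X2, the likelihood-ratio G2, and the Cressie-Read statistic.
from scipy.stats import power_divergence

observed = [28, 35, 42, 15]          # hypothetical category counts
expected = [30, 30, 30, 30]          # uniform null hypothesis

results = {}
for lam in ("pearson", "log-likelihood", "cressie-read"):
    stat, p = power_divergence(observed, f_exp=expected, lambda_=lam)
    results[lam] = stat
    print(f"{lam}: statistic={stat:.3f}, p={p:.4f}")
```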

Relevance: 90.00%

Abstract:

Der Müller und die fünf Räuber [The Miller and the Five Robbers], Überfall [Robbery]

Relevance: 90.00%

Abstract:

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
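Test (i) can be sketched very compactly: for a fully specified forecast the event count in the test window is Poisson with the predicted rate, so the observed count can be referred to that distribution. The rate and count below are hypothetical, not from any real forecast:

```python
# Sketch of the "number" test: under a fully specified forecast the count
# of events in the test window is Poisson with the predicted rate, so an
# observed count can be checked against that distribution. Both numbers
# here are hypothetical.
from scipy.stats import poisson

predicted_rate = 4.0    # forecast: expected number of events (hypothetical)
observed_count = 9      # events that actually occurred (hypothetical)

# Upper-tail check: P(N >= observed) under the forecast rate.
p_upper = poisson.sf(observed_count - 1, predicted_rate)
print(p_upper)          # a small value means more events than the forecast allows
```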

Relevance: 90.00%

Abstract:

mgof computes goodness-of-fit tests for the distribution of a discrete (categorical, multinomial) variable. The default is to perform classical large-sample chi-squared approximation tests based on Pearson's X² statistic and the log likelihood ratio (G²) statistic, or a statistic from the Cressie-Read family. Alternatively, mgof computes exact tests using Monte Carlo methods or exhaustive enumeration. A Kolmogorov-Smirnov test for discrete data is also provided. The moremata package, also available from SSC, is required.

Relevance: 90.00%

Abstract:

A new Stata command called -mgof- is introduced. The command is used to compute distributional tests for discrete (categorical, multinomial) variables. Apart from classic large-sample χ²-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read 1984), large-sample tests for complex survey designs and exact tests for small samples are supported. The complex survey correction is based on the approach by Rao and Scott (1981) and parallels the survey design correction used for independence tests in -svy:tabulate-. The exact tests are computed using Monte Carlo methods or exhaustive enumeration. An exact Kolmogorov-Smirnov test for discrete data is also provided.
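The Monte Carlo route to the exact tests can be sketched outside Stata as well: simulate multinomial samples under the null, compute the statistic for each, and take the exceedance fraction as the p-value. A Python illustration with hypothetical counts (this mirrors the idea, not -mgof-'s implementation):

```python
# Monte Carlo multinomial goodness-of-fit sketch: simulate the null
# distribution of Pearson's X2 and take the fraction of simulated
# statistics at least as large as the observed one as the p-value.
# Counts and null probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
observed = np.array([12, 9, 3])           # hypothetical observed counts
probs = np.array([1 / 3, 1 / 3, 1 / 3])   # null cell probabilities
n = observed.sum()
expected = n * probs

x2_obs = ((observed - expected) ** 2 / expected).sum()
sims = rng.multinomial(n, probs, size=20000)
x2_sim = ((sims - expected) ** 2 / expected).sum(axis=1)
p_mc = (x2_sim >= x2_obs).mean()
print(x2_obs, p_mc)
```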

Relevance: 90.00%

Abstract:

The paper investigates the effects of trade liberalisation on the technical efficiency of the Bangladesh manufacturing sector by estimating a combined stochastic frontier-inefficiency model using panel data for the period 1978–94 for 25 three-digit level industries. The results show that the overall technical efficiency of the manufacturing sector, as well as the technical efficiencies of the majority of the individual industries, has increased over time. The findings also clearly suggest that trade liberalisation, proxied by export orientation and capital deepening, has had a significant impact on the reduction of overall technical inefficiency. Similarly, the scale of operation and the proportion of non-production labour in total employment appear as important determinants of technical inefficiency. The evidence also indicates that both export-promoting and import-substituting industries have experienced rises in technical efficiency over time. The results are also suggestive of neutral technical change, although (at the 5 per cent level of significance) the empirical results indicate that there was no technical change in the manufacturing industries. Finally, the joint test based on the likelihood ratio (LR) test rejects the Cobb-Douglas production technology as a description of the data, given the specification of the translog production technology.
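The final test mentioned is a standard nested-model likelihood ratio test: twice the gap in maximised log-likelihoods between the translog and its Cobb-Douglas restriction, referred to a chi-square with one degree of freedom per restriction. A sketch with placeholder values (the paper's log-likelihoods are not reported in the abstract):

```python
# Nested-model likelihood ratio test sketch:
# LR = 2 * (lnL_unrestricted - lnL_restricted), compared with a chi-square
# whose df is the number of restrictions. All numbers are placeholders.
from scipy.stats import chi2

loglik_translog = -182.4       # hypothetical unrestricted log-likelihood
loglik_cobb_douglas = -195.1   # hypothetical restricted log-likelihood
n_restrictions = 6             # e.g. squared and interaction terms set to zero

lr_stat = 2.0 * (loglik_translog - loglik_cobb_douglas)
p_value = chi2.sf(lr_stat, df=n_restrictions)
print(lr_stat, p_value)        # small p: reject the Cobb-Douglas restriction
```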

Relevance: 90.00%

Abstract:

The progesterone receptor (PR) is a candidate gene for the development of endometriosis, a complex disease with strong hormonal features, common in women of reproductive age. We typed the 306 base pair Alu insertion (AluIns) polymorphism in intron G of PR in 101 individuals, estimated linkage disequilibrium (LD) between five single-nucleotide polymorphisms (SNPs) across the PR locus in 980 Australian triads (endometriosis case and two parents) and used transmission disequilibrium testing (TDT) for association with endometriosis. The five SNPs showed strong pairwise LD, and the AluIns was highly correlated with proximal SNPs rs1042839 (Δ² = 0.877, D′ = 1.00, P < 0.0001) and rs500760 (Δ² = 0.438, D′ = 0.942, P < 0.0001). TDT showed weak evidence of allelic association between endometriosis and rs500760 (P = 0.027) but not in the expected direction. We identified a common susceptibility haplotype GGGCA across the five SNPs (P = 0.0167) in the whole sample, but likelihood ratio testing of haplotype transmission and non-transmission of the AluIns and flanking SNPs showed no significant pattern. Further, analysis of our results pooled with those from two previous studies suggested that neither the T2 allele of the AluIns nor the T1/T2 genotype was associated with endometriosis.
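The TDT used above has a simple closed form: among heterozygous parents, it compares transmissions and non-transmissions of an allele to affected offspring with a McNemar-type statistic. A sketch with hypothetical counts (not the study's data):

```python
# Transmission disequilibrium test (TDT) sketch. Among heterozygous
# parents, compare how often an allele is transmitted to the affected
# child versus not transmitted. The counts below are hypothetical.
from scipy.stats import chi2

transmitted = 58        # hypothetical: allele passed to affected child
not_transmitted = 42    # hypothetical: allele not passed

# McNemar-type statistic, asymptotically chi-square with 1 df under the
# null of no association/linkage.
tdt_stat = (transmitted - not_transmitted) ** 2 / (transmitted + not_transmitted)
p_value = chi2.sf(tdt_stat, df=1)
print(tdt_stat, p_value)
```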

Relevance: 90.00%

Abstract:

Background: Atrial fibrillation in the elderly is common and potentially life threatening. The classical sign of atrial fibrillation is an irregularly irregular pulse. Objective: The objective of this research was to determine the accuracy of pulse palpation to detect atrial fibrillation. Methods: We searched Medline, EMBASE, and the reference lists of review articles for studies that compared pulse palpation with the electrocardiogram (ECG) diagnosis of atrial fibrillation. Two reviewers independently assessed the search results to determine the eligibility of studies, extracted data, and assessed the quality of the studies. Results: We identified 3 studies (2385 patients) that compared pulse palpation with ECG. The estimated sensitivity of pulse palpation ranged from 91% to 100%, while specificity ranged from 70% to 77%. Pooled sensitivity was 94% (95% confidence interval [CI], 84%-97%) and pooled specificity was 72% (95% CI, 69%-75%). The pooled positive likelihood ratio was 3.39, while the pooled negative likelihood ratio was 0.10. Conclusions: Pulse palpation has a high sensitivity but relatively low specificity for atrial fibrillation. It is therefore useful for ruling out atrial fibrillation. It may also be a useful screen to apply opportunistically for previously undetected atrial fibrillation. Assuming a prevalence of 3% for undetected atrial fibrillation in patients older than 65 years, and given the test's sensitivity and specificity, opportunistic pulse palpation in this age group would detect an irregular pulse in 30% of screened patients, requiring further testing with ECG. Among screened patients, 0.2% would have atrial fibrillation undetected with pulse palpation.
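The screening arithmetic in the conclusion can be reproduced directly from the pooled estimates: the screen-positive fraction is true positives plus false positives, and the missed cases are the false negatives.

```python
# Reproducing the screening arithmetic in the conclusion from the pooled
# estimates: prevalence 3%, sensitivity 94%, specificity 72%.
prevalence, sensitivity, specificity = 0.03, 0.94, 0.72

# Screen-positive fraction = true positives + false positives
screen_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
# Missed atrial fibrillation = false negatives
missed_af = prevalence * (1 - sensitivity)

print(round(screen_positive, 2), round(missed_af, 3))  # 0.3 0.002
```

These match the 30% screen-positive rate and 0.2% missed-case rate quoted in the abstract.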

Relevance: 90.00%

Abstract:

2000 Mathematics Subject Classification: 62F25, 62F03.