Abstract:
Background: Alcohol is heavily consumed in sub-Saharan Africa, affects HIV transmission and treatment, and is difficult to measure. Our goal was to examine the test characteristics of a direct metabolite of alcohol consumption, phosphatidylethanol (PEth). Methods: Persons infected with HIV were recruited from a large HIV clinic in southwestern Uganda. We conducted surveys and breath alcohol concentration (BRAC) testing at 21 daily visits at home or at drinking establishments, and blood was collected on day 21 (n = 77). PEth in whole blood was compared with alcohol consumption over the prior 7, 14, and 21 days. Results: (i) The receiver operating characteristic area under the curve (ROC-AUC) was highest for PEth versus any consumption over the prior 21 days (0.92; 95% confidence interval [CI]: 0.86 to 0.97). The sensitivity of any detectable PEth was 88.0% (95% CI: 76.0 to 95.6) and the specificity was 88.5% (95% CI: 69.8 to 97.6). (ii) The ROC-AUC of PEth versus any 21-day alcohol consumption did not vary with age, body mass index, CD4 cell count, hepatitis B virus infection, or antiretroviral therapy status, but was higher for men than for women (p = 0.03). (iii) PEth measurements were correlated with several measures of alcohol consumption, including the number of drinking days in the prior 21 days (Spearman r = 0.74, p < 0.001) and BRAC (r = 0.75, p < 0.001). Conclusions: The data add to the body of evidence supporting PEth as a useful marker of alcohol consumption, with high ROC-AUC, sensitivity, and specificity. Future studies should further address the period and level of alcohol consumption for which PEth is detectable.
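As a hedged illustration of the test characteristics reported above, the following Python sketch computes sensitivity, specificity, and the ROC-AUC for a continuous biomarker against a binary self-report reference. All data, the detection limit, and variable names are invented for illustration; this is not the study's analysis code.

```python
# Hypothetical sketch of biomarker test characteristics; all values simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 1 = any self-reported drinking in the prior 21 days, 0 = none
drank_21d = rng.integers(0, 2, size=77)
# Simulated PEth concentrations: higher on average for drinkers (assumption)
peth = np.where(drank_21d == 1,
                rng.gamma(4.0, 60.0, size=77),   # drinkers
                rng.gamma(1.2, 15.0, size=77))   # abstainers

# ROC-AUC for the continuous marker against the self-report reference
auc = roc_auc_score(drank_21d, peth)

# Sensitivity/specificity of "any detectable PEth" at an assumed detection limit
detect_limit = 8.0  # ng/mL, purely illustrative
positive = peth >= detect_limit
sens = positive[drank_21d == 1].mean()
spec = (~positive[drank_21d == 0]).mean()
print(f"AUC={auc:.2f}  sensitivity={sens:.1%}  specificity={spec:.1%}")
```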
Abstract:
Partner notification (PN, or contact tracing) is an important aspect of treating bacterial sexually transmitted infections (STIs) such as Chlamydia trachomatis. It facilitates the identification of newly infected cases that can be treated through individual case management. PN also acts indirectly by limiting onward transmission in the general population. However, the impact of PN, both at the level of individuals and at the level of the population, remains unclear. Since it is difficult to study the effects of PN empirically, mathematical and computational models are useful tools for investigating its potential as a public health intervention. To this end, we developed an individual-based modeling framework called Rstisim. It allows the implementation of different models of STI transmission at various levels of complexity and the reconstruction of the complete dynamic sexual partnership network over any time period. A key feature of this framework is that we can trace an individual's partnership history in detail and investigate the outcomes of different PN strategies for C. trachomatis. For individual case management, the results suggest that notifying three or more partners from the preceding 18 months yields substantial numbers of new cases. In contrast, the successful treatment of current partners is most important for preventing re-infection of index cases and reducing further transmission of C. trachomatis at the population level. The findings of this study demonstrate the difference between individual-level and population-level outcomes of public health interventions for STIs.
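Rstisim itself is an R-based framework; as a rough illustration of the idea of tracing partnership histories and notifying recent partners, here is a deliberately minimal Python toy. The population size, contact rate, transmission probability, and lookback window are all arbitrary assumptions, not parameters from the paper.

```python
# Toy individual-based sketch of partner notification (PN); not Rstisim itself.
import random

random.seed(1)
N, DAYS = 2000, 3 * 365
infected = set(random.sample(range(N), 50))
partners = {i: [] for i in range(N)}          # history of (partner, contact day)

for day in range(DAYS):
    # form a few random short partnerships per day (arbitrary contact rate)
    for _ in range(20):
        a, b = random.sample(range(N), 2)
        partners[a].append((b, day)); partners[b].append((a, day))
        # per-contact transmission when exactly one of the pair is infected
        if (a in infected) != (b in infected) and random.random() < 0.6:
            infected |= {a, b}

def notify(index, lookback_days, max_partners):
    """Return notified partners from the lookback window who are infected."""
    recent = [p for p, d in partners[index] if d >= DAYS - lookback_days]
    contacted = recent[-max_partners:]        # cap at the most recent partners
    return [p for p in contacted if p in infected]

# individual case management: notify up to 3 partners from the last 18 months
found = [len(notify(i, 548, 3)) for i in list(infected)[:100]]
print("mean infected partners found per index case:", sum(found) / len(found))
```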
Abstract:
OBJECTIVE: To compare the content covered by twelve obesity-specific health-status measures using the International Classification of Functioning, Disability and Health (ICF). DESIGN: Obesity-specific health-status measures were identified and then linked to the ICF separately by two trained health professionals according to standardized guidelines. The degree of agreement between the health professionals was calculated by means of the kappa (κ) statistic. Bootstrapped confidence intervals (CI) were calculated. The obesity-specific health-status measures were compared at the component and category levels of the ICF. MEASUREMENTS: Twelve condition-specific health-status measures were identified and included in this study, namely the obesity-related problem scale, the obesity eating problems scale, the obesity-related coping and obesity-related distress questionnaire, the impact of weight on quality of life questionnaire (short version), the health-related quality of life questionnaire, the obesity adjustment survey (short form), the short specific quality of life scale, the obesity-related well-being questionnaire, the bariatric analysis and reporting outcome system, the bariatric quality of life index, the obesity and weight loss quality of life questionnaire, and the weight-related symptom measure. RESULTS: In the 280 items of the twelve measures, a total of 413 concepts were identified and linked to 87 different ICF categories. The measures varied strongly in the number of concepts contained and the number of ICF categories used to map these concepts. The proportion of items on body functions varied from 12% in the obesity-related problem scale to 95% in the weight-related symptom measure. The estimated kappa coefficients ranged between 0.79 (CI: 0.72, 0.86) at the ICF component level and 0.97 (CI: 0.93, 1.0) at the third ICF level. CONCLUSION: The ICF proved highly useful for the content comparison of obesity-specific health-status measures. The results may provide clinicians and researchers with new insights when selecting health-status measures for clinical studies in obesity.
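The agreement analysis described above can be sketched as follows: Cohen's kappa between two raters with a percentile-bootstrap confidence interval. The ratings are simulated; the number of categories and the raw agreement rate are assumptions, not the study's data.

```python
# Sketch: Cohen's kappa with a percentile-bootstrap 95% CI; data fabricated.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n_items = 280
rater1 = rng.integers(0, 5, size=n_items)        # say, 5 ICF components
rater2 = np.where(rng.random(n_items) < 0.85,    # ~85% raw agreement (assumed)
                  rater1, rng.integers(0, 5, size=n_items))

kappa = cohen_kappa_score(rater1, rater2)

# percentile bootstrap over items
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_items, size=n_items)
    boot.append(cohen_kappa_score(rater1[idx], rater2[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"kappa={kappa:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```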
Abstract:
PURPOSE: To determine the sensitivity, specificity, and inter-observer variability of different whole-body MRI (WB-MRI) sequences in patients with multiple myeloma (MM). METHODS AND MATERIALS: WB-MRI using a 1.5T MRI scanner was performed in 23 consecutive patients (13 males, 10 females; mean age 63±12 years) with histologically proven MM. All patients were clinically classified according to infiltration (low-grade, n=7; intermediate-grade, n=7; high-grade, n=9) and to the staging system of Durie and Salmon PLUS (stage I, n=12; stage II, n=4; stage III, n=7). The control group consisted of 36 individuals without malignancy (25 males, 11 females; mean age 57±13 years). Two observers independently evaluated the following WB-MRI sequences: T1w-TSE (T1), T2w-TIRM (T2), and the combination of both sequences including a contrast-enhanced T1w-TSE with fat saturation (T1±CE/T2). They had to determine growth patterns (focal and/or diffuse) and the MRI sequence that provided the highest confidence level in depicting the MM lesions. Results were calculated on a per-patient basis. RESULTS: Visual detection of MM was as follows: T1, 65% (sensitivity)/85% (specificity); T2, 76%/81%; T1±CE/T2, 67%/88%. Inter-observer variability was as follows: T1, 0.3; T2, 0.55; T1±CE/T2, 0.55. Sensitivity varied with infiltration grade (T1: 1=60%; 2=36%; 3=83%; T2: 1=70%; 2=71%; 3=89%; T1±CE/T2: 1=50%; 2=50%; 3=89%) and clinical stage (T1: 1=58%; 2=63%; 3=79%; T2: 1=58%; 2=88%; 3=100%; T1±CE/T2: 1=50%; 2=63%; 3=100%). T2w-TIRM sequences achieved the best reliability in depicting the MM lesions (65%, averaged over both readers). CONCLUSIONS: T2w-TIRM sequences achieved the highest sensitivity and best reliability, and thus might be valuable for the initial assessment of MM. For exact staging and grading, the examination protocol should encompass unenhanced and contrast-enhanced T1w-MRI sequences in addition to T2w-TIRM.
Abstract:
Activating epidermal growth factor receptor (EGFR) mutations are recognized biomarkers for patients with metastatic non-small cell lung cancer (NSCLC) treated with EGFR tyrosine kinase inhibitors (TKIs). EGFR TKIs can also have activity against NSCLC without EGFR mutations, requiring the identification of additional relevant biomarkers. Previous studies on tumor EGFR protein levels and EGFR gene copy number have yielded inconsistent results. The aim of this study was to identify novel biomarkers of the response to TKIs in NSCLC by investigating whole-genome expression at the exon level. We used exon arrays and clinical samples from a previous trial (SAKK19/05) to investigate exon-level expression variation in 3 genes potentially playing a key role in modulating treatment response: EGFR, V-Ki-ras2 Kirsten rat sarcoma viral oncogene homolog (KRAS), and vascular endothelial growth factor (VEGFA). We identified the expression of EGFR exon 18 as a new predictive marker for patients with untreated metastatic NSCLC receiving bevacizumab and erlotinib in the first-line setting. Overexpression of EGFR exon 18 in tumor was significantly associated with tumor shrinkage, independently of EGFR mutation status. A similar significant association was found in blood samples. In conclusion, exonic EGFR expression, particularly of exon 18, was found to be a relevant predictive biomarker of response to bevacizumab and erlotinib. Based on these results, we propose a new model of EGFR testing in tumor and blood.
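As an illustration of how an exon-level signal might be related to a binary response such as tumor shrinkage, the sketch below applies a Mann-Whitney U test to simulated exon 18 expression values. It is not the SAKK19/05 analysis; the group sizes, effect size, and choice of test are invented assumptions.

```python
# Illustrative sketch: exon-level expression vs. binary response; data simulated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
shrink = rng.normal(7.2, 1.0, 30)     # log2 exon 18 signal, tumor shrinkage
no_shrink = rng.normal(6.5, 1.0, 35)  # log2 exon 18 signal, no shrinkage

# one-sided test: is expression higher in the shrinkage group?
u, p = mannwhitneyu(shrink, no_shrink, alternative="greater")
print(f"U = {u:.0f}, one-sided p = {p:.4f}")
```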
Abstract:
The Sun shows strong variability in its magnetic activity, from Grand minima to Grand maxima, but the nature of this variability is not fully understood, mostly because of the insufficient length of the directly observed solar activity records and because of uncertainties related to long-term reconstructions. Aims. Here we present a new adjustment-free reconstruction of solar activity over three millennia and study its different modes. Methods. We present a new adjustment-free, physical reconstruction of solar activity over the past three millennia, using the latest verified carbon cycle, 14C production, and archeomagnetic field models. This improvement allowed us to study different modes of solar activity at an unprecedented level of detail. Results. The distribution of solar activity is clearly bi-modal, implying the existence of distinct modes of activity. The main, regular activity mode corresponds to moderate activity that varies in a relatively narrow band between sunspot numbers 20 and 67. The existence of a separate Grand minimum mode with reduced solar activity, which cannot be explained by random fluctuations of the regular mode, is confirmed at a high confidence level. The possible existence of a separate Grand maximum mode is also suggested, but the statistics are too limited to reach a confident conclusion. Conclusions. The Sun is shown to operate in distinct modes: a main general mode, a Grand minimum mode corresponding to an inactive Sun, and a possible Grand maximum mode corresponding to an unusually active Sun. These results provide important constraints for both dynamo models of Sun-like stars and investigations of possible solar influence on Earth's climate.
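One way to probe a bi-modality claim like this is to compare Gaussian mixtures with different numbers of components by BIC, as in the sketch below. The sunspot-number series is simulated to mimic a regular mode plus a Grand minimum mode; it is not the paper's reconstruction, and the mixture-model approach is an assumed stand-in for the paper's statistical analysis.

```python
# Sketch: BIC comparison of 1- vs 2- vs 3-component mixtures; data simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
regular = rng.normal(44, 12, size=2500)    # moderate-activity mode, SN ~20-67
grand_min = rng.normal(5, 3, size=400)     # Grand-minimum mode
sn = np.concatenate([regular, grand_min]).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(sn)
    print(f"{k} component(s): BIC = {gm.bic(sn):.0f}")  # lower BIC is better
```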
Abstract:
The isotopic abundance of 85Kr in the atmosphere, currently at the level of 10⁻¹¹, has increased by orders of magnitude since the dawn of the nuclear age. With a half-life of 10.76 years, 85Kr is of great interest as a tracer for environmental samples such as air, groundwater, and ice. Atom Trap Trace Analysis (ATTA) is an emerging method for the analysis of rare krypton isotopes at isotopic abundances as low as 10⁻¹⁴ using krypton gas samples of a few microliters. Both the reliability and the reproducibility of the method are examined in the present study by an inter-comparison among different instruments. The 85Kr/Kr ratios of 12 samples, in the range of 10⁻¹³ to 10⁻¹⁰, were measured independently in three laboratories: a low-level counting laboratory in Bern, Switzerland, and two ATTA laboratories, one in Hefei, China, and another in Argonne, USA. The results agree at the 5% precision level.
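A minimal sketch of such an inter-laboratory comparison for one sample: combine the three ratios with an uncertainty-weighted mean and check their compatibility with a chi-squared test. The numerical values and the 5% relative uncertainties are invented for illustration, not the paper's measurements.

```python
# Sketch: weighted mean and chi-squared compatibility check; values invented.
import numpy as np
from scipy.stats import chi2

ratio = np.array([1.02e-11, 0.98e-11, 1.05e-11])   # e.g., Bern, Hefei, Argonne
sigma = 0.05 * ratio                               # ~5% relative uncertainty

w = 1.0 / sigma**2
mean = np.sum(w * ratio) / np.sum(w)               # inverse-variance weighting
chi2_stat = np.sum(((ratio - mean) / sigma) ** 2)
p = chi2.sf(chi2_stat, df=len(ratio) - 1)          # high p => consistent labs
print(f"weighted mean = {mean:.3e}, chi2 = {chi2_stat:.2f}, p = {p:.2f}")
```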
Abstract:
The closed Tangra Yumco Basin underwent the strongest Quaternary lake-level changes so far recorded on the Tibetan Plateau. It was hitherto unknown what effect this had on local Holocene vegetation development. A 3.6-m sediment core from a recessional lake terrace at 4,700 m a.s.l., 160 m above the present level of Tangra Yumco, was studied to reconstruct Holocene flooding phases (sedimentology and ostracod analyses), vegetation dynamics, and human influence (palynology, charcoal and coprophilous fungi analyses). Peat at the base of the profile proves that the lake level was below 4,700 m a.s.l. during the Pleistocene/Holocene transition. A deep-lake phase started after 11 cal ka BP, but the ostracod record indicates the level was not higher than ~4,720 m a.s.l. (180 m above present) and decreased gradually after the early Holocene maximum. Additional sediment ages from the basin suggest recession of Tangra Yumco from the coring site after 2.6 cal ka BP, with a shallow local lake persisting at the site until ~1 cal ka BP. The final peat formation indicates drier conditions thereafter. Persistence of Artemisia steppe during the Holocene lake high-stand resembles palynological records from west Tibet that indicate early Holocene aridity, in spite of high lake levels that may have resulted from meltwater input. Yet pollen assemblages indicate humidity closer to that of present potential forest areas near Lhasa, with 500-600 mm annual precipitation. Thus, early to mid-Holocene humidity was sufficient to sustain at least juniper forest, but Artemisia dominance persisted as a consequence of a combination of environmental disturbances, such as (1) strong early Holocene climate fluctuations, (2) inundation of habitats suitable for forest, (3) extensive water surfaces that served as barriers to terrestrial diaspore transport from refuge areas, (4) strong erosion that denuded the non-flooded upper slopes, and (5) increasing human influence since the late glacial.
Abstract:
While equal political representation of all citizens is a fundamental democratic goal, it is hampered empirically in a multitude of ways. This study examines how the societal level of economic inequality affects the representation of relatively poor citizens by parties and governments. Using CSES survey data for citizens' policy preferences and expert placements of political parties, empirical evidence is found that in economically more unequal societies, the party system represents the preferences of relatively poor citizens less well than in more equal societies. This moderating effect of economic inequality is also found for policy congruence between citizens and governments, albeit in a slightly less clear-cut form.
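The moderation claim can be illustrated with a toy interaction regression: congruence regressed on a poor-citizen indicator, inequality, and their product. The data below are simulated with the effect built in; all variable names are assumptions, not CSES fields, and OLS is an assumed stand-in for the study's actual estimator.

```python
# Toy moderation sketch: does the representation gap widen with inequality?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
gini = rng.uniform(0.25, 0.55, n)            # societal inequality
poor = rng.integers(0, 2, n)                 # 1 = relatively poor citizen
# congruence worsens for the poor as inequality rises (effect built in)
congruence = 1.0 - 0.5 * poor * gini - 0.2 * gini + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([poor, gini, poor * gini]))
fit = sm.OLS(congruence, X).fit()
print(fit.params)   # the poor x gini coefficient captures the moderation
```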
Abstract:
Purpose We hypothesized that reduced arousability (Richmond Agitation Sedation Scale, RASS, scores −2 to −3) for any reason during delirium assessment increases the apparent prevalence of delirium in intensive care patients. To test this hypothesis, we assessed delirium using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC) in intensive care patients during sedation stops, and related the findings to the level of sedation, as assessed with the RASS. Methods We assessed delirium in 80 patients with an ICU stay longer than 48 h using the CAM-ICU and ICDSC during daily sedation stops. Sedation was assessed using the RASS. The effect on the prevalence of delirium of including patients with a RASS of −2 or −3 during sedation stop ("light to moderate sedation"; eye contact for less than 10 s, or not at all, respectively) was analyzed. Results A total of 467 patient-days were assessed. The proportion of CAM-ICU-positive evaluations decreased from 53% to 31% (p < 0.001) if assessments from patients at RASS −2/−3 (22% of all assessments) were excluded. Similarly, the proportion of positive ICDSC results decreased from 51% to 29% (p < 0.001). Conclusions Sedation per se can result in positive items on both the CAM-ICU and the ICDSC, and therefore in a diagnosis of delirium. Consequently, the apparent prevalence of delirium depends on how a depressed level of consciousness after a sedation stop is interpreted (delirium vs persisting sedation). We suggest that any reports on delirium using these assessment tools should be stratified by a sedation score obtained during the assessment.
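The headline arithmetic can be reproduced approximately from the reported percentages, as in this sketch; the counts are rounded reconstructions for illustration, not the study's raw data.

```python
# Worked sketch: apparent delirium prevalence with and without RASS -2/-3
# assessments. Counts are approximate reconstructions from the percentages.
def prevalence(pos, n):
    return pos / n

n_all = 467       # patient-days assessed
n_deep = 103      # ~22% of assessments made at RASS -2/-3
pos_all = 248     # ~53% CAM-ICU positive over all assessments
pos_awake = 113   # ~31% positive once RASS -2/-3 assessments are excluded

print(f"all assessments:      {prevalence(pos_all, n_all):.0%}")
print(f"RASS -2/-3 excluded:  {prevalence(pos_awake, n_all - n_deep):.0%}")
```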
Abstract:
Previous syntheses of the effects of environmental conditions on the outcome of plant-plant interactions summarize results from pairwise studies. However, upscaling such studies to the community level is problematic because of the existence of multiple species assemblages and of species-specific responses to both the environmental conditions and the presence of neighbors. We conducted the first global synthesis of community-level studies from harsh environments, which included data from 71 alpine and 137 dryland communities, to: (i) test how important facilitative interactions are as a driver of community structure, (ii) evaluate whether we can predict the frequency of positive plant-plant interactions across differing environmental conditions and habitats, and (iii) assess whether thresholds in the response of plant-plant interactions to environmental gradients exist between "moderate" and "extreme" environments. We also used those community-level studies performed across gradients of at least three points to evaluate how the average environmental conditions, the length of the gradient studied, and the number of points sampled across the gradient affect the form and strength of the facilitation-environmental conditions relationship. Over 25% of the species present were more spatially associated with nurse plants than expected by chance in both alpine and dryland areas, illustrating the high importance of positive plant-plant interactions for the maintenance of plant diversity in these environments. Facilitative interactions were more frequent, and more related to environmental conditions, in alpine than in dryland areas, perhaps because drylands are generally characterized by a larger variety of environmental stress factors and plant functional traits. The frequency of facilitative interactions in alpine communities peaked at 1,000 mm of annual rainfall and globally decreased with elevation. The frequency of positive interactions in dryland communities decreased globally with water scarcity or annual temperature range. Positive facilitation-drought stress relationships are more likely in shorter regional gradients, but these relationships are obscured in regions with a greater species turnover or with complex environmental gradients. By showing the different climatic drivers and behaviors of plant-plant interactions in dryland and alpine areas, our results will improve predictions regarding the effect of facilitation on the assembly of plant communities and their response to changes in environmental conditions.
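The hump-shaped rainfall relationship reported for alpine communities can be illustrated with a quadratic fit whose vertex estimates the peak, as below. The data are simulated with a peak built in at 1,000 mm; this is an assumed functional form, not the synthesis' actual model.

```python
# Sketch: quadratic fit locating the rainfall at peak facilitation frequency.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.uniform(100, 2000, 80)                       # mm annual rainfall
freq = 0.4 - ((rain - 1000) / 1500) ** 2 + rng.normal(0, 0.03, 80)

a, b, c = np.polyfit(rain, freq, 2)                     # freq ~ a*r^2 + b*r + c
print(f"estimated peak at {-b / (2 * a):.0f} mm")       # vertex of the parabola
```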
Abstract:
RATIONALE In biomedical journals, authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. OBJECTIVE To assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. METHODS AND RESULTS All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure, and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information on empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89%, SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). LIMITATIONS The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative of all cardiovascular journals. CONCLUSION In three selected cardiovascular journals we found a high level of inappropriate SEM use, along with explicit methods statements announcing its use for data description, especially in basic science studies. To improve this situation, these and other journals should provide clear instructions to authors on how to report descriptive information on empirical data.
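The distinction at issue is easy to demonstrate: the SEM quantifies the precision of the mean and shrinks with n, so it understates the spread of the data when used descriptively. A short sketch with simulated data:

```python
# Sketch: SD describes data spread; SEM describes precision of the mean;
# a CI is the inferential summary. Data simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.normal(100, 15, size=40)           # e.g., 40 measurements

sd = x.std(ddof=1)
sem = sd / np.sqrt(len(x))                 # equals stats.sem(x)
ci = stats.t.interval(0.95, df=len(x) - 1, loc=x.mean(), scale=sem)

print(f"mean={x.mean():.1f}  SD={sd:.1f}  SEM={sem:.1f}")
print(f"95% CI for the mean: ({ci[0]:.1f}, {ci[1]:.1f})")
```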
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores (the Mechanism, Glasgow Coma Scale [GCS], Age, and Pressure [MGAP] score and the GCS, Age, and Pressure [GAP] score) using data from the UK Trauma Audit and Research Network. First, we assessed discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low-, medium-, and high-risk GAP categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consistent with the results originally published. Reclassifying the score cutoff points improved likelihood ratios, sensitivity, and specificity, as well as areas under the ROC curve. CONCLUSION We found both scores to be valid triage tools for stratifying emergency department patients according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
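The likelihood-ratio arithmetic behind such score validations is straightforward, as in this sketch; the sensitivity, specificity, and cutoff performance values are hypothetical, not the study's estimates.

```python
# Sketch: likelihood ratios from sensitivity/specificity, and post-test risk
# via pre-test odds x LR. All performance numbers are illustrative.
def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return lr_pos, lr_neg

sens, spec = 0.90, 0.65            # hypothetical cutoff performance
lr_pos, lr_neg = likelihood_ratios(sens, spec)

prevalence = 0.069                 # 30-day mortality in the cohort
pre_odds = prevalence / (1 - prevalence)
post_odds = pre_odds * lr_pos      # odds after a positive (high-risk) score
print(f"LR+={lr_pos:.2f}  LR-={lr_neg:.2f}  "
      f"post-test risk={post_odds / (1 + post_odds):.1%}")
```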
Abstract:
BACKGROUND Acetabular fractures and the surgical interventions used to treat them can result in nerve injuries. To date, only small case series have tried to explore the frequency of nerve injuries and their association with patient and treatment characteristics. High-quality data on the risk of traumatic and iatrogenic nerve lesions and their epidemiology in relation to different fracture types and surgical approaches are lacking. QUESTIONS/PURPOSES The purpose of this study was to determine (1) the proportion of patients who develop nerve injuries after acetabular fracture; (2) which fracture type(s) are associated with increased nerve injury risk; and (3) which surgical approach was associated with the highest proportion of patients developing nerve injuries, using data from the German Pelvic Trauma Registry. Two secondary aims were (4) to assess the relationship between hospital volume and nerve injury; and (5) to assess internal data validity. METHODS Between March 2001 and June 2012, 2236 patients with acetabular fractures were entered into a prospectively maintained registry from 29 hospitals; of those, 2073 (92.7%) had complete records on the endpoints of interest in this retrospective study and were analyzed. The neurological status of these patients was recorded at admission and at discharge. A total of 1395 of 2073 (67%) patients underwent surgery, and the proportions of intervention-related and other hospital-acquired nerve injuries were obtained. Overall proportions of patients developing nerve injuries, risk by fracture type, and risk by surgical approach were analyzed. RESULTS The proportion of patients diagnosed with nerve injuries was 4% (76 of 2073) at hospital admission and 7% (134 of 2073) at discharge. Patients with fractures of the "posterior wall" (relative risk [RR], 2.0; 95% confidence interval [CI], 1.4-2.8; p=0.001), "posterior column and posterior wall" (RR, 2.9; CI, 1.6-5.0; p=0.002), and "transverse + posterior wall" (RR, 2.1; CI, 1.3-3.5; p=0.010) types were more likely to have nerve injuries at hospital discharge. The proportions of patients with intervention-related nerve injuries and with other hospital-acquired nerve injuries were both 2% (24 of 1395 and 46 of 2073, respectively). Both were associated with the Kocher-Langenbeck approach (RR, 3.0; CI, 1.4-6.2; p=0.006; and RR, 2.4; CI, 1.4-4.3; p=0.004, respectively). CONCLUSIONS Acetabular fractures involving the posterior wall were most commonly accompanied by nerve injuries. The data also suggest that the Kocher-Langenbeck approach to the pelvic ring is associated with a higher risk of perioperative nerve injuries. Trauma surgeons should be aware of common nerve injuries, particularly in posterior wall fractures. The results of the study should help provide patients with more exact information on the risk of perioperative nerve injuries in acetabular fractures. LEVEL OF EVIDENCE Level III, therapeutic study. See Guidelines for Authors for a complete description of levels of evidence.
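Results such as "RR 2.0 (95% CI 1.4-2.8)" follow from a 2x2 table via the log-scale Wald interval; the sketch below uses invented counts, not registry data.

```python
# Sketch: relative risk with a 95% log-scale Wald CI; counts invented.
import math

a, n1 = 40, 500     # nerve injuries / patients, exposed fracture type
b, n2 = 94, 2073    # nerve injuries / patients, comparison group

rr = (a / n1) / (b / n2)
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)     # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```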
Abstract:
OBJECTIVE To evaluate antenatal surveillance strategies and the optimal timing of delivery for monoamniotic twin pregnancies. METHODS Obstetric and perinatal outcomes were retrospectively retrieved for 193 monoamniotic twin pregnancies. Fetal and neonatal outcomes were compared between fetuses followed in an inpatient setting and those undergoing intensive outpatient follow-up from 26 to 28 weeks of gestation until planned cesarean delivery between 32 and 35 weeks of gestation. The risk of fetal death was compared with the risk of neonatal complications. RESULTS Fetal deaths occurred in 18.1% of fetuses (70/386). Two hundred ninety-five neonates from 153 pregnancies were born alive after 23 weeks of gestation. There were 17 neonatal deaths (5.8%), five of which involved major congenital anomalies. The prospective risk of a nonrespiratory neonatal complication fell below the prospective risk of fetal death after 32 4/7 weeks of gestation (95% confidence interval 32 0/7-33 4/7). The incidence of death or a nonrespiratory neonatal complication was not significantly different between fetuses managed as outpatients (14/106 [13.2%]) and those managed as inpatients (15/142 [10.5%]; P=.55). Our statistical power to detect a difference in outcomes between these groups was low. CONCLUSIONS The in utero risk to a monoamniotic twin fetus exceeds the risk of a postnatal nonrespiratory complication at 32 4/7 weeks of gestation. If close fetal surveillance is instituted after 26-28 weeks of gestation and delivery takes place at approximately 33 weeks of gestation, the risk of fetal or neonatal death is low, regardless of the surveillance setting. LEVEL OF EVIDENCE II.
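The outpatient-versus-inpatient comparison (14/106 vs 15/142, P=.55) can be checked with a standard test on the 2x2 table; the sketch below uses Fisher's exact test, though the abstract does not specify which test the authors used.

```python
# Sketch: 2x2 comparison of outpatient vs inpatient event rates with
# Fisher's exact test; counts taken from the abstract, test choice assumed.
from scipy.stats import fisher_exact

out_event, out_n = 14, 106
in_event, in_n = 15, 142
table = [[out_event, out_n - out_event],
         [in_event, in_n - in_event]]

odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.2f}")
```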