807 results for "score validity"


Relevance: 20.00%

Abstract:

BACKGROUND: Clinical disorders often share common symptoms and aetiological factors. Bifactor models acknowledge the role of an underlying general distress component and of more specific sub-domains of psychopathology, which capture the unique components of disorders over and above the general factor. METHODS: A bifactor model jointly calibrated data on subjective distress from the Mood and Feelings Questionnaire and the Revised Children's Manifest Anxiety Scale. The model encompassed a general distress factor and specific factors for (a) hopelessness-suicidal ideation, (b) generalised worrying and (c) restlessness-fatigue at age 14, which were related to lifetime clinical diagnoses established by interviews at age 14 (concurrent validity) and to current diagnoses at age 17 (predictive validity) in a British population sample of 1159 adolescents. RESULTS: Diagnostic interviews confirmed the validity of a symptom-level bifactor model. The underlying general distress factor was a powerful but non-specific predictor of affective, anxiety and behaviour disorders. The specific factors for hopelessness-suicidal ideation and generalised worrying contributed to predictive specificity: hopelessness-suicidal ideation predicted concurrent and future affective disorder; generalised worrying predicted concurrent and future anxiety, specifically concurrent generalised anxiety disorders. Generalised worrying was negatively associated with behaviour disorders. LIMITATIONS: The analyses of gender differences and the prediction of specific disorders were limited by the low frequency of disorders other than depression. CONCLUSIONS: The bifactor model was able to differentiate concurrent clinical diagnoses and to predict future ones. This can inform the development of targeted as well as non-specific interventions for the prevention and treatment of different disorders.
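The bifactor structure described above can be pictured as a loading matrix in which every item loads on the general distress factor and on exactly one specific factor. The sketch below simulates data with that structure; the item counts and loading values are invented for illustration and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bifactor loading matrix: 9 items, 1 general + 3 specific
# factors. Every item loads on the general factor (column 0) and on
# exactly one specific factor (columns 1-3); loadings are made up.
n_items, n_factors = 9, 4
loadings = np.zeros((n_items, n_factors))
loadings[:, 0] = 0.6                        # general distress factor
for k, block in enumerate(([0, 1, 2], [3, 4, 5], [6, 7, 8]), start=1):
    loadings[block, k] = 0.4                # specific factor k

# Simulate item scores for 500 respondents: x = f @ L.T + noise,
# with orthogonal factors, as the bifactor model assumes.
factors = rng.standard_normal((500, n_factors))
items = factors @ loadings.T + 0.5 * rng.standard_normal((500, n_items))
```

Each simulated item thus mixes general distress with exactly one specific sub-domain, which is what lets the specific factors carry predictive information "over and above" the general factor.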

Relevance: 20.00%

Abstract:

PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death from all information available at the time of ED admission, and to compare its discriminative performance with the non-systematic risk estimate of the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24%), trauma (1,522, 18%), infection [1,328, 15%; including sepsis (357, 4.1%), severe sepsis (249, 2.9%) and septic shock (27, 0.3%)], cardiovascular (1,022, 12%), gastrointestinal (848, 10%) and respiratory (449, 5%). The predictors in the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. It performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The prediction model can facilitate the identification of ED patients at higher mortality risk. It performs better than a non-systematic assessment and may enable more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
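The discriminative ability reported above (area under the ROC curve of 0.92) measures how often the model ranks a patient who died above one who survived. A minimal sketch on synthetic data, with made-up predictors and coefficients standing in for the real admission variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the admission data: three hypothetical
# predictors and an in-hospital death label generated from an
# invented logistic model.
n = 800
X = rng.standard_normal((n, 3))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def auc(score, outcome):
    """Area under the ROC curve: P(score_pos > score_neg), ties 1/2."""
    pos, neg = score[outcome == 1], score[outcome == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

discrimination = auc(logit, y)   # well above 0.5 for an informative score
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation; 0.92, as in the study, is strong discrimination.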

Relevance: 20.00%

Abstract:

It has been widely accepted for some time that species-appropriate environmental enrichment is important for the welfare of research animals, but its impact on research data initially received little attention. This has now changed, as the use of enrichment as one element of routine husbandry has expanded. In addition to its use in the care of larger research animals, such as nonhuman primates, it is now being used to improve the environments of small research animals, such as rodents, which are used in significantly greater numbers and in a wide variety of studies. Concern has been expressed that enrichment negatively affects both experimental validity and reproducibility. However, when a concise definition of enrichment is used, with a sound understanding of the biology and behaviour of the animal as well as the research constraints, it becomes clear that the welfare of research animals can be enhanced through environmental enrichment without compromising their purpose. Indeed, it is shown that the converse is true: the provision of suitable enrichment enhances the well-being of the animal, thereby refining the animal model and improving the research data. Thus, the argument is made that both the validity and reproducibility of the research are enhanced when proper consideration is given to the research animal's living environment and the animal's opportunities to express species-typical behaviours.

Relevance: 20.00%

Abstract:

Trabecular bone score (TBS) rests on the textural analysis of DXA to reflect the decay in trabecular structure characterising osteoporosis. Yet its discriminative power in fracture studies remains poorly understood, as prior biomechanical tests found no correlation with vertebral strength. To verify this result, which may have been due to an unrealistic set-up, and to cover a wide range of loading scenarios, data from three previous biomechanical studies using different experimental settings were used. They involved the compressive failure of 62 human lumbar vertebrae loaded (1) via intervertebral discs to mimic the in vivo situation ("full vertebra"), (2) via the classical endplate embedding ("vertebral body") or (3) via a ball joint to induce anterior wedge failure ("vertebral section"). HR-pQCT scans acquired prior to testing were used to simulate anterior-posterior DXA, from which areal bone mineral density (aBMD) and the initial slope of the variogram (ISV), the early definition of TBS, were evaluated. Finally, the relation of aBMD and ISV to failure load (Fexp) and apparent failure stress (σexp) was assessed, and their relative contributions to a multi-linear model were quantified via ANOVA. We found that, unlike aBMD, ISV did not correlate significantly with Fexp and σexp, except in the "vertebral body" case (r² = 0.396, p = 0.028). Aside from the "vertebral section" set-up, where it explained only 6.4% of σexp (p = 0.037), it brought no significant improvement over aBMD alone. These results indicate that ISV, a replica of TBS, is a poor surrogate for vertebral strength no matter the testing set-up, which supports the prior observations and raises a fortiori the question of the deterministic factors underlying the statistical relationship between TBS and vertebral fracture risk.
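The initial slope of the variogram (ISV) summarises how quickly grey-level differences grow with pixel distance. A minimal sketch on a toy image, assuming a simple isotropic experimental variogram and a first-difference slope estimate; the actual TBS/ISV pipeline operates on projected DXA images and is more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy grey-level image: box-smoothed noise, so nearby pixels correlate
# and the variogram rises with lag (a stand-in for a projected image).
base = rng.random((70, 70))
img = sum(base[i:i + 64, j:j + 64] for i in range(5) for j in range(5)) / 25

def variogram(image, max_lag=4):
    """Experimental variogram V(h): mean squared grey-level difference
    between pixels a lag h apart (averaged over horizontal/vertical)."""
    v = []
    for h in range(1, max_lag + 1):
        dx = image[:, h:] - image[:, :-h]
        dy = image[h:, :] - image[:-h, :]
        v.append((np.mean(dx ** 2) + np.mean(dy ** 2)) / 2)
    return np.array(v)

v = variogram(img)
isv = v[1] - v[0]   # initial slope approximated by the first difference
```

A fine, well-connected texture yields a shallow initial slope, a coarse degraded one a steep slope, which is the intuition behind using ISV/TBS as a microarchitecture index.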

Relevance: 20.00%

Abstract:

Objective The validity of current ultra-high risk (UHR) criteria is under-examined in help-seeking minors, particularly in children below the age of 12 years. Thus, the present study investigated predictors of one-year outcome in children and adolescents (CAD) with UHR status. Method Thirty-five children and adolescents (age 9-17 years) meeting UHR criteria according to the Structured Interview for Psychosis-Risk Syndromes were followed up for 12 months. Regression analyses were employed to detect baseline predictors of conversion to psychosis and of the outcome of non-converters (remission versus persistence of UHR status). Results At one-year follow-up, 20% of patients had developed schizophrenia and 25.7% had remitted from their UHR status, which consequently had persisted in 54.3%. No patient had fully remitted from mental disorders, even when UHR status was not maintained. Conversion was best predicted by any transient psychotic symptom and a disorganized communication score. No prediction model for outcome beyond conversion was identified. Conclusions Our findings provide the first evidence for the predictive utility of UHR criteria in CAD in terms of brief intermittent psychotic symptoms (BIPS) when accompanied by signs of cognitive impairment, i.e. disorganized communication. However, because attenuated psychotic symptoms (APS) related to thought content and perception were indicative of non-conversion at one-year follow-up, their use in the early detection of psychosis in CAD needs further study. Overall, this further highlights the need for more in-depth studies of developmental peculiarities in the early detection and treatment of psychoses with illness onset in childhood and early adolescence.

Relevance: 20.00%

Abstract:

BACKGROUND AND PURPOSE Previous studies have suggested that advanced age predicts worse outcome following mechanical thrombectomy. We assessed outcomes from 2 recent large prospective studies to determine the association among TICI, age, and outcome. MATERIALS AND METHODS Data from the Solitaire FR Thrombectomy for Acute Revascularization (STAR) trial, an international multicenter prospective single-arm thrombectomy study, and the Solitaire arm of the Solitaire FR With the Intention For Thrombectomy (SWIFT) trial were pooled. TICI was determined by core laboratory review. Good outcome was defined as an mRS score of 0-2 at 90 days. We analyzed the association among clinical outcome, successful-versus-unsuccessful reperfusion (TICI 2b-3 versus TICI 0-2a), and age (dichotomized across the median). RESULTS Two hundred sixty-nine of 291 patients treated with Solitaire in the STAR and SWIFT databases for whom TICI and 90-day outcome data were available were included. The median age was 70 years (interquartile range, 60-76 years), with an age range of 25-88 years. The mean age was 59 years among patients 70 years of age or younger and 77 years among patients older than 70 years. There was no significant difference in baseline NIHSS scores or procedure time metrics. Hemorrhage and device-related complications were more common in the younger age group, but the differences did not reach statistical significance. In absolute terms, the rate of good outcome was higher in the younger population (64% versus 44%, P < .001). However, the magnitude of benefit from successful reperfusion was higher in the older group (OR 4.82, 95% CI 1.32-17.63, in patients 70 years or younger versus OR 7.32, 95% CI 1.73-30.99, in patients older than 70 years). CONCLUSIONS Successful reperfusion is the strongest predictor of good outcome following mechanical thrombectomy, and the magnitude of benefit is highest in the patient population older than 70 years of age.
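The odds ratios with confidence intervals quoted above come from 2×2 tables of reperfusion status against outcome. A sketch of the standard Wald computation on the log-odds scale; the counts below are invented for illustration and are not from STAR/SWIFT:

```python
import math

# Hypothetical 2x2 table (illustrative counts only):
# rows = TICI 2b-3 vs TICI 0-2a, cols = good vs poor outcome.
a, b = 20, 10   # TICI 2b-3: good, poor
c, d = 5, 15    # TICI 0-2a: good, poor

or_ = (a * d) / (b * c)                    # odds ratio = 6.0 here
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)   # 95% Wald CI, lower bound
hi = math.exp(math.log(or_) + 1.96 * se)   # 95% Wald CI, upper bound
```

Because the interval is built on the log scale, it is asymmetric around the OR, which is why the reported intervals (e.g. 1.73-30.99 around 7.32) stretch much further upward than downward.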

Relevance: 20.00%

Abstract:

BACKGROUND Current reporting guidelines do not call for a standardised declaration of follow-up completeness, although study validity depends on the representativeness of the measured outcomes. The Follow-Up Index (FUI) describes follow-up completeness at a given study end date as the ratio between the investigated and the potential follow-up period. The association between the FUI and the accuracy of survival estimates was investigated. METHODS FUI and Kaplan-Meier estimates were calculated twice for 1207 consecutive patients undergoing aortic repair during an 11-year period. In scenario A, the population's routine clinical follow-up data (available from a prospective registry) were analysed conventionally. For the control scenario B, an independent survey was completed at the predefined study end. To determine the relation between FUI and the accuracy of study findings, discrepancies between the scenarios in FUI, follow-up duration and cumulative survival estimates were evaluated using multivariate analyses. RESULTS Scenario A noted 89 deaths (7.4%) during a mean considered follow-up of 30±28 months. Scenario B, although analysing the same study period, detected 304 deaths (25.2%, P<0.001) because it scrutinised the complete follow-up period (49±32 months). FUI (0.57±0.35 versus 1.00±0, P<0.001) and cumulative survival estimates (78.7% versus 50.7%, P<0.001) differed significantly between the scenarios, suggesting that incomplete follow-up information led to an underestimation of mortality. The degree of follow-up completeness (i.e. FUI quartiles and FUI intervals) correlated directly with the accuracy of study findings: underestimation of long-term mortality increased almost linearly by 30% with every 0.1 drop in FUI (adjusted HR 1.30; 95% CI 1.24-1.36, P<0.001). CONCLUSION Follow-up completeness is a prerequisite for reliable outcome assessment and should be declared systematically. The FUI is a simple measure suited as a reporting standard. Evidence lacking such information must be challenged as potentially flawed by selection bias.
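The FUI definition above is a simple ratio, sketched here as a small helper (the function and variable names are mine, not the paper's):

```python
def follow_up_index(investigated_days, potential_days):
    """FUI: ratio of the investigated to the potential follow-up period
    at a given study end date (1.0 = complete follow-up)."""
    if potential_days <= 0:
        raise ValueError("potential follow-up must be positive")
    return investigated_days / potential_days

# A patient last seen 300 days after surgery, whose potential follow-up
# (surgery to study end) is 1000 days, has FUI 0.3.
fui = follow_up_index(300, 1000)
```

Averaged over a cohort, this is the 0.57±0.35 reported for scenario A versus the 1.00 of the complete survey in scenario B.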

Relevance: 20.00%

Abstract:

OBJECTIVE The purpose of this study was to investigate outcomes of patients treated with prasugrel or clopidogrel after percutaneous coronary intervention (PCI) in a nationwide acute coronary syndrome (ACS) registry. BACKGROUND Prasugrel was found to be superior to clopidogrel in a randomized trial of ACS patients undergoing PCI. However, little is known about its efficacy in everyday practice. METHODS All ACS patients enrolled in the Acute Myocardial Infarction in Switzerland (AMIS)-Plus registry who underwent PCI and were treated with a thienopyridine P2Y12 inhibitor between January 2010 and December 2013 were included in this analysis. Patients were stratified according to treatment with prasugrel or clopidogrel, and outcomes were compared using propensity score matching. The primary endpoint was a composite of death, recurrent infarction and stroke at hospital discharge. RESULTS Of 7621 patients, 2891 (38%) received prasugrel and 4730 (62%) received clopidogrel. Independent predictors of in-hospital mortality were age, Killip class >2, STEMI, Charlson comorbidity index >1, and resuscitation prior to admission. After propensity score matching (2301 patients per group), the primary endpoint was significantly lower in prasugrel-treated patients (3.0% vs 4.3%; p=0.022), while bleeding events were more frequent (4.1% vs 3.0%; p=0.048). In-hospital mortality was significantly reduced (1.8% vs 3.1%; p=0.004), but no significant differences were observed in the rates of recurrent infarction (0.8% vs 0.7%; p=1.00) or stroke (0.5% vs 0.6%; p=0.85). In a predefined subset of matched patients with one-year follow-up (n=1226), mortality between discharge and one year was not significantly reduced in prasugrel-treated patients (1.3% vs 1.9%, p=0.38). CONCLUSIONS In everyday practice in Switzerland, prasugrel is predominantly used in younger patients with STEMI undergoing primary PCI. A propensity score-matched analysis suggests a mortality benefit of prasugrel over clopidogrel in these patients.
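Propensity score matching, as used above, pairs each prasugrel patient with a clopidogrel patient whose estimated probability of receiving prasugrel is closest. A minimal greedy 1:1 nearest-neighbour sketch, assuming the propensity scores have already been estimated; the scores below are random placeholders, not registry data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical propensity scores (probability of receiving prasugrel),
# e.g. from a logistic model of age, presentation and comorbidities.
ps_treated = rng.random(50)
ps_control = rng.random(200)

def nearest_neighbor_match(ps_t, ps_c, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    without replacement, within a caliper."""
    available = list(range(len(ps_c)))
    pairs = []
    for i, p in enumerate(ps_t):
        j = min(available, key=lambda k: abs(ps_c[k] - p), default=None)
        if j is not None and abs(ps_c[j] - p) <= caliper:
            pairs.append((i, j))       # (treated index, control index)
            available.remove(j)
    return pairs

pairs = nearest_neighbor_match(ps_treated, ps_control)
```

Outcomes are then compared within the matched pairs only, which is what makes the 2301-per-group comparison above more balanced than the raw 2891-versus-4730 cohorts.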

Relevance: 20.00%

Abstract:

PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS), alone or in addition to bone mineral density (BMD), with regard to fracture risk. METHODS Retrospective analysis of the relative contributions of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women previously participating in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, LS BMD 0.863 g/cm², and TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T-score ≤ -2.5 and a TBS < 1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS, alone or in combination with BMD, predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
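Combining the BMD and TBS thresholds as described above amounts to flagging a woman when either criterion fires, which can only widen the net relative to either criterion alone. A sketch with made-up cohort values (the thresholds are the ones quoted in the abstract; the simulated distributions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical cohort: lumbar spine T-scores and TBS values.
t_score = rng.normal(-1.5, 1.2, 300)
tbs = rng.normal(1.20, 0.10, 300)

# Flag women captured by either criterion, as in the combined approach.
osteoporotic = t_score <= -2.5   # densitometric osteoporosis
degraded_tbs = tbs < 1.150       # degraded microarchitecture
either = osteoporotic | degraded_tbs
```

This union logic is why the combined thresholds captured 77% of fractures while each criterion alone captured only about 58-60%.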

Relevance: 20.00%

Abstract:

Trabecular bone score (TBS) is a grey-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a BMD-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex, and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in the risk variable in the direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% CI 1.35-1.53) when adjusted for age and time since baseline, and was similar in men and women (p > 0.10). When additionally adjusted for the FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor of fracture (GR 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 vs. 1.70, 95% CI 1.60-1.81). A smaller change in GR was observed for hip fracture (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. The findings support the use of TBS as a potential adjustment to FRAX probability, although the impact of the adjustment remains to be determined in the context of clinical assessment guidelines.
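The gradient of risk (GR) used above is a hazard ratio per 1 SD change of the risk variable, taken in the direction of increased risk: given a per-unit log-hazard coefficient β and the variable's standard deviation s, GR = exp(±β·s). A sketch with invented numbers, not the meta-analysis estimates:

```python
import math

def gradient_of_risk(beta_per_unit, sd, direction=-1.0):
    """Hazard ratio per 1 SD change of the risk variable in the
    direction of increased risk (direction=-1 when lower values mean
    higher risk, as for TBS)."""
    return math.exp(direction * beta_per_unit * sd)

# Illustrative: a log-hazard of -4.0 per unit TBS with SD 0.09 gives
# a GR of exp(0.36), roughly 1.43 per SD decrease.
gr = gradient_of_risk(-4.0, 0.09)
```

Expressing effects per SD is what makes the GRs of TBS, BMD and FRAX probability directly comparable despite their different units.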

Relevance: 20.00%

Abstract:

BACKGROUND A single non-invasive gene expression profiling (GEP) test (AlloMap®) is often used to discriminate whether a heart transplant recipient is at low risk of acute cellular rejection at the time of testing. In a randomized trial, use of the test (a GEP score from 0 to 40) was shown to be non-inferior to routine endomyocardial biopsy for surveillance after heart transplantation in selected low-risk patients with respect to clinical outcomes. Recently, it was suggested that the within-patient variability of consecutive GEP scores may independently predict future clinical events, although further studies were recommended. Here we analysed an independent patient population to determine the prognostic utility of within-patient variability of GEP scores in predicting future clinical events. METHODS We defined GEP score variability as the standard deviation of four GEP scores collected ≥315 days post-transplantation. Of the 737 patients from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) II trial, 36 were assigned to the composite-event group (death, re-transplantation or graft failure ≥315 days post-transplantation and within 3 years of the final GEP test) and 55 to the control group (non-event patients). In this case-control study, the performance of GEP score variability in predicting future events was evaluated by the area under the receiver operating characteristic curve (AUC ROC). Negative predictive values (NPV) and positive predictive values (PPV), including 95% confidence intervals (CI), of GEP score variability were calculated. RESULTS The estimated prevalence of events was 17%. Events occurred at a median of 391 (interquartile range 376) days after the final GEP test. The AUC ROC of GEP score variability for the prediction of a composite event was 0.72 (95% CI 0.6-0.8). The NPV at a GEP score variability of 0.6 was 97% (95% CI 91.4-100.0); the PPV at a GEP score variability of 1.5 was 35.4% (95% CI 13.5-75.8). CONCLUSION In heart transplant recipients, GEP score variability may be used to predict the probability that a composite event will occur within 3 years after the last GEP score. TRIAL REGISTRATION Clinicaltrials.gov identifier NCT00761787.
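GEP score variability, defined above as the standard deviation of four consecutive GEP scores, is straightforward to compute. A small helper, assuming the sample standard deviation (whether the paper used the sample or population SD is not stated in the abstract):

```python
import statistics

def gep_score_variability(scores):
    """Within-patient GEP score variability: standard deviation of
    four consecutive GEP scores (each on the 0-40 scale)."""
    if len(scores) != 4:
        raise ValueError("variability is defined over four scores")
    return statistics.stdev(scores)   # sample SD assumed here

stable = gep_score_variability([30, 30, 31, 30])    # low variability
erratic = gep_score_variability([22, 34, 27, 38])   # high variability
```

Under the thresholds quoted above, a patient like `stable` (variability below 0.6) would fall in the high-NPV group, while one like `erratic` (above 1.5) would fall in the higher-risk PPV group.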

Relevance: 20.00%

Abstract:

OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted throughout 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
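Overall-survival comparisons like the one above rest on Kaplan-Meier estimates of the survival function, S(t) = Π over event times t_i ≤ t of (1 − d_i/n_i), where d_i deaths occur among n_i patients still at risk. A hand-rolled sketch on a tiny illustrative dataset (not SEER data):

```python
import numpy as np

# Illustrative survival data: times in months; event=1 death, 0 censored.
times = np.array([2, 3, 3, 5, 8, 8, 12, 15])
events = np.array([1, 1, 0, 1, 1, 0, 1, 0])

def kaplan_meier(times, events):
    """Kaplan-Meier estimator: at each event time, survival is
    multiplied by (1 - deaths / number still at risk)."""
    s, curve = 1.0, {}
    for t in np.unique(times[events == 1]):
        n_at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1 - d / n_at_risk
        curve[float(t)] = s
    return curve

curve = kaplan_meier(times, events)
```

Censored patients (events=0) leave the risk set without triggering a drop in the curve, which is precisely what a complete-follow-up registry analysis relies on to stay unbiased.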

Relevance: 20.00%

Abstract:

BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients was developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models treating liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C AD score (CLIF-C ADs). External validation was performed in 225 cirrhotic AD patients, and CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine and INR were selected as the best predictors of mortality. The C-index of CLIF-C ADs for predicting 3- and 12-month mortality was better than those of the Child-Pugh, MELD, and MELD-Na scores in the derivation, internal validation and external datasets. The ability of CLIF-C ADs to predict 3-month mortality improved when using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than the other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
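Using "estimated coefficients as relative weights" means the score is a rescaled linear predictor over the selected variables. The sketch below illustrates that construction only; the weights, transformations and rescaling are invented for illustration and are NOT the published CLIF-C ADs formula.

```python
import math

def toy_ad_score(age, sodium, wbc, creatinine, inr):
    """Toy prognostic score: weighted sum of the five predictors named
    in the abstract, with hypothetical weights and an arbitrary
    rescaling to a convenient range."""
    lp = (0.03 * age            # older age -> higher risk
          - 0.04 * sodium       # lower sodium -> higher risk
          + 0.3 * math.log(wbc)
          + 0.3 * math.log(creatinine)
          + 0.6 * math.log(inr))
    return 10 * (lp + 6)        # arbitrary shift/scale, not the real one

low = toy_ad_score(45, 140, 5.0, 0.8, 1.0)   # benign profile
high = toy_ad_score(70, 128, 12.0, 2.5, 2.0)  # decompensated profile
```

The real score's discrimination is then summarised by its C-index (0.72-0.77 above), the survival-analysis analogue of the AUC.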