997 results for GLAUCOMA PROBABILITY SCORE


Relevance: 20.00%

Abstract:

BACKGROUND AND PURPOSE Previous studies have suggested that advanced age predicts worse outcome following mechanical thrombectomy. We assessed outcomes from 2 recent large prospective studies to determine the association among TICI, age, and outcome. MATERIALS AND METHODS Data from the Solitaire FR Thrombectomy for Acute Revascularization (STAR) trial, an international multicenter prospective single-arm thrombectomy study, and the Solitaire arm of the Solitaire FR With the Intention For Thrombectomy (SWIFT) trial were pooled. TICI was determined by core laboratory review. Good outcome was defined as an mRS score of 0-2 at 90 days. We analyzed the association among clinical outcome, successful-versus-unsuccessful reperfusion (TICI 2b-3 versus TICI 0-2a), and age (dichotomized across the median). RESULTS Two hundred sixty-nine of 291 patients treated with Solitaire in the STAR and SWIFT databases for whom TICI and 90-day outcome data were available were included. The median age was 70 years (interquartile range, 60-76 years) with an age range of 25-88 years. The mean age of patients 70 years of age or younger was 59 years, and it was 77 years for patients older than 70 years. There were no significant differences in baseline NIHSS scores or procedure time metrics. Hemorrhage and device-related complications were more common in the younger age group, but the differences did not reach statistical significance. In absolute terms, the rate of good outcome was higher in the younger population (64% versus 44%, P < .001). However, the magnitude of benefit from successful reperfusion was higher in the group older than 70 years (OR, 4.82; 95% CI, 1.32-17.63 versus OR, 7.32; 95% CI, 1.73-30.99). CONCLUSIONS Successful reperfusion is the strongest predictor of good outcome following mechanical thrombectomy, and the magnitude of benefit is highest in the patient population older than 70 years of age.
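
The age-stratified comparison above boils down to odds ratios computed from 2x2 tables of reperfusion status (TICI 2b-3 vs. 0-2a) against good outcome (mRS 0-2). A minimal sketch of that calculation with a Woolf-type (log-based) confidence interval; the cell counts are hypothetical, not taken from the STAR/SWIFT data:

    import numpy as np
    from scipy.stats import norm

    def odds_ratio_ci(a, b, c, d, alpha=0.05):
        """Odds ratio and Woolf (log-based) CI for a 2x2 table:
        a = reperfused with good outcome, b = reperfused without,
        c = not reperfused with good outcome, d = not reperfused without."""
        or_ = (a * d) / (b * c)
        se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        z = norm.ppf(1 - alpha / 2)
        lo = np.exp(np.log(or_) - z * se_log)
        hi = np.exp(np.log(or_) + z * se_log)
        return or_, (lo, hi)

    # Hypothetical counts for one age stratum:
    print(odds_ratio_ci(a=60, b=40, c=10, d=25))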

Relevance: 20.00%

Abstract:

OBJECTIVE In patients with a long life expectancy and high-risk (HR) prostate cancer (PCa), the chance of dying from PCa is not negligible and may change significantly according to the time elapsed from surgery. The aim of this study was to evaluate long-term survival patterns in young patients treated with radical prostatectomy (RP) for HRPCa. MATERIALS AND METHODS Within a multi-institutional cohort, 600 young patients (≤59 years) treated with RP between 1987 and 2012 for HRPCa (defined as at least one of the following adverse characteristics: prostate-specific antigen >20, cT3 or higher, biopsy Gleason sum 8-10) were identified. Smoothed cumulative incidence plots were used to assess cancer-specific mortality (CSM) and other-cause mortality (OCM) rates at 10, 15, and 20 years after RP. The same analyses were performed to assess the 5-year probability of CSM and OCM in patients who survived 5, 10, and 15 years after RP. A multivariable competing risk regression model was fitted to identify predictors of CSM and OCM. RESULTS The 10-, 15-, and 20-year CSM and OCM rates were 11.6% and 5.5% vs. 15.5% and 13.5% vs. 18.4% and 19.3%, respectively. The 5-year probabilities of CSM and OCM among patients who survived 5, 10, and 15 years after RP were 6.4% and 2.7% vs. 4.6% and 9.6% vs. 4.2% and 8.2%, respectively. Year of surgery, pathological stage and Gleason score, surgical margin status, and lymph node invasion were the major determinants of CSM (all P≤0.03). Conversely, none of the covariates was significantly associated with OCM (all P≥0.09). CONCLUSIONS Very long-term cancer control in young high-risk patients after RP is highly satisfactory. In young patients, PCa is the leading cause of death during the first 10 years of survivorship after RP. Thereafter, mortality not related to PCa becomes the main cause of death. Consequently, surgery should be considered in young patients with high-risk disease, and strict PCa follow-up should be enforced during the first 10 years of survivorship after RP.
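
The "smoothed cumulative incidence" and competing-risk analyses above estimate cause-specific cumulative mortality while treating death from the other cause as a competing event rather than censoring it. A minimal (non-smoothed) sketch of the underlying Aalen-Johansen cumulative incidence estimator in plain NumPy, run on synthetic data rather than the study cohort:

    import numpy as np

    def cumulative_incidence(time, event, cause):
        """Aalen-Johansen estimate of the cumulative incidence of one event type
        in the presence of competing risks.
        time  : follow-up time per patient
        event : 0 = censored, 1, 2, ... = event types (e.g. 1 = CSM, 2 = OCM)
        cause : event type whose cumulative incidence is returned"""
        time, event = np.asarray(time, float), np.asarray(event, int)
        ev_times = np.unique(time[event > 0])
        surv, cif, steps = 1.0, 0.0, []
        for t in ev_times:
            at_risk = np.sum(time >= t)
            d_cause = np.sum((time == t) & (event == cause))
            d_any = np.sum((time == t) & (event > 0))
            cif += surv * d_cause / at_risk   # cause-specific hazard weighted by overall survival
            surv *= 1.0 - d_any / at_risk     # all-cause Kaplan-Meier update
            steps.append(cif)
        return ev_times, np.array(steps)

    # Synthetic example: times in years; 1 = cancer-specific death, 2 = other-cause death.
    rng = np.random.default_rng(0)
    t = rng.uniform(0.5, 20, 600)
    e = rng.choice([0, 1, 2], size=600, p=[0.7, 0.15, 0.15])
    times, cif = cumulative_incidence(t, e, cause=1)
    print("Estimated 10-year CSM:", cif[times <= 10][-1])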

Relevance: 20.00%

Abstract:

AIMS The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). METHODS AND RESULTS Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. CONCLUSION We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients.
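
The comparative-effectiveness estimate above combines propensity scores, applied through inverse probability of treatment weighting (IPTW), with Cox models fitted per database. A minimal sketch of that pipeline for a single database, assuming a pandas DataFrame with illustrative column names (a binary treatment column "oac", follow-up time "t", event indicator "e", and a few baseline covariates; these names are assumptions, not the study's variables):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from lifelines import CoxPHFitter

    def iptw_cox(df, covariates, treatment="oac", duration="t", event="e"):
        """Hazard ratio for treatment from an IPTW-weighted Cox model."""
        X, z = df[covariates].to_numpy(), df[treatment].to_numpy()
        ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
        p_treat = z.mean()
        # Stabilized weights: P(Z = z) / P(Z = z | X)
        w = np.where(z == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
        data = df[[duration, event, treatment]].copy()
        data["w"] = w
        cph = CoxPHFitter()
        cph.fit(data, duration_col=duration, event_col=event,
                weights_col="w", robust=True)   # robust (sandwich) SEs for weighted data
        return np.exp(cph.params_[treatment])   # HR for OAC vs. APT

    # Usage sketch: hr = iptw_cox(db_frame, ["age", "hypertension", "prior_tia"])

Database-specific log hazard ratios estimated this way would then be pooled with a random-effects meta-analysis model, as described in the abstract.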

Relevance: 20.00%

Abstract:

OBJECTIVES To assess the clinical profile and long-term mortality in SYNTAX score II-based strata of patients who received percutaneous coronary interventions (PCI) in contemporary randomized trials. BACKGROUND The SYNTAX score II was developed in the randomized, all-comers SYNTAX trial population and is composed of 2 anatomical and 6 clinical variables. The interaction of these variables with the treatment provides individual long-term mortality predictions if a patient undergoes coronary artery bypass grafting (CABG) or PCI. METHODS Patient-level data (n=5433) from 7 contemporary coronary drug-eluting stent (DES) trials were pooled. The mortality for CABG and for PCI was estimated for every patient. The difference in mortality estimates for these two revascularization strategies was used to divide the patients into three groups of theoretical treatment recommendations: PCI, CABG, or PCI/CABG (the latter indicating equipoise between CABG and PCI for long-term mortality). RESULTS The three groups had marked differences in their baseline characteristics. According to the predicted risk differences, 5115 patients could be treated by either PCI or CABG, 271 should be treated only by PCI, and CABG alone was rarely recommended (n=47). At 3-year follow-up, according to the SYNTAX score II recommendations, patients recommended for CABG had higher mortality than the PCI and PCI/CABG groups (17.4% vs. 6.1% and 5.3%, respectively; P<0.01). CONCLUSIONS The SYNTAX score II demonstrated the capability to help in stratifying PCI procedures.
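
The grouping step above assigns each patient a theoretical recommendation by comparing the score's predicted long-term mortality under CABG with that under PCI. A small sketch of that classification logic; the 5% equipoise margin is an illustrative assumption, since the abstract does not state the threshold used:

    import numpy as np

    def recommend(pred_mort_cabg, pred_mort_pci, margin=0.05):
        """Classify patients by the difference in predicted long-term mortality.
        `margin` is the absolute risk difference treated as equipoise
        (an illustrative value, not the one used in the pooled analysis)."""
        diff = np.asarray(pred_mort_pci, float) - np.asarray(pred_mort_cabg, float)
        return np.where(diff > margin, "CABG",          # PCI predicted to be riskier
               np.where(diff < -margin, "PCI", "PCI/CABG"))

    # Hypothetical predicted mortalities for three patients:
    print(recommend([0.10, 0.08, 0.20], [0.09, 0.15, 0.19]))
    # -> ['PCI/CABG' 'CABG' 'PCI/CABG']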

Relevance: 20.00%

Abstract:

OBJECTIVE The purpose of this study was to investigate outcomes of patients treated with prasugrel or clopidogrel after percutaneous coronary intervention (PCI) in a nationwide acute coronary syndrome (ACS) registry. BACKGROUND Prasugrel was found to be superior to clopidogrel in a randomized trial of ACS patients undergoing PCI. However, little is known about its efficacy in everyday practice. METHODS All ACS patients enrolled in the Acute Myocardial Infarction in Switzerland (AMIS)-Plus registry undergoing PCI and being treated with a thienopyridine P2Y12 inhibitor between January 2010 and December 2013 were included in this analysis. Patients were stratified according to treatment with prasugrel or clopidogrel, and outcomes were compared using propensity score matching. The primary endpoint was a composite of death, recurrent infarction, and stroke at hospital discharge. RESULTS Of 7621 patients, 2891 received prasugrel (38%) and 4730 received clopidogrel (62%). Independent predictors of in-hospital mortality were age, Killip class >2, STEMI, Charlson comorbidity index >1, and resuscitation prior to admission. After propensity score matching (2301 patients per group), the primary endpoint was significantly lower in prasugrel-treated patients (3.0% vs 4.3%; p=0.022), while bleeding events were more frequent (4.1% vs 3.0%; p=0.048). In-hospital mortality was significantly reduced (1.8% vs 3.1%; p=0.004), but no significant differences were observed in rates of recurrent infarction (0.8% vs 0.7%; p=1.00) or stroke (0.5% vs 0.6%; p=0.85). In a predefined subset of matched patients with one-year follow-up (n=1226), mortality between discharge and one year was not significantly reduced in prasugrel-treated patients (1.3% vs 1.9%, p=0.38). CONCLUSIONS In everyday practice in Switzerland, prasugrel is predominantly used in younger patients with STEMI undergoing primary PCI. A propensity score-matched analysis suggests a mortality benefit from prasugrel compared with clopidogrel in these patients.
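
The comparison above hinges on 1:1 propensity score matching of prasugrel- and clopidogrel-treated patients. A minimal greedy nearest-neighbour sketch that matches on the logit of the propensity score within a caliper; the caliper width (0.2 SD of the logit) and the column handling are illustrative assumptions, not the registry's specification:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def match_pairs(X, treated, caliper_sd=0.2, seed=0):
        """Greedy 1:1 nearest-neighbour matching without replacement on the
        logit of the propensity score. Returns (treated_idx, control_idx) pairs."""
        treated = np.asarray(treated).astype(int)
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        logit = np.log(ps / (1 - ps))
        caliper = caliper_sd * logit.std()
        rng = np.random.default_rng(seed)
        t_idx = rng.permutation(np.flatnonzero(treated == 1))  # random order avoids order bias
        c_idx = list(np.flatnonzero(treated == 0))
        pairs = []
        for i in t_idx:
            if not c_idx:
                break
            d = np.abs(logit[c_idx] - logit[i])
            j = int(np.argmin(d))
            if d[j] <= caliper:
                pairs.append((i, c_idx.pop(j)))                # match without replacement
        return pairs

    # Usage sketch: pairs = match_pairs(df[covariate_cols].to_numpy(), df["prasugrel"].to_numpy())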

Relevance: 20.00%

Abstract:

PURPOSE To determine the predictive value of the vertebral trabecular bone score (TBS) alone or in addition to bone mineral density (BMD) with regard to fracture risk. METHODS Retrospective analysis of the relative contribution of BMD [measured at the femoral neck (FN), total hip (TH), and lumbar spine (LS)] and TBS to the risk of incident clinical fractures in a representative cohort of elderly post-menopausal women previously participating in the Swiss Evaluation of the Methods of Measurement of Osteoporotic Fracture Risk study. RESULTS Complete datasets were available for 556 of 701 women (79%). Mean age was 76.1 years, mean LS BMD 0.863 g/cm², and mean TBS 1.195. LS BMD and LS TBS were moderately correlated (r² = 0.25). After a mean of 2.7 ± 0.8 years of follow-up, the incidence of fragility fractures was 9.4%. Age- and BMI-adjusted hazard ratios per standard deviation decrease (95% confidence intervals) were 1.58 (1.16-2.16), 1.77 (1.31-2.39), and 1.59 (1.21-2.09) for LS, FN, and TH BMD, respectively, and 2.01 (1.54-2.63) for TBS. Whereas 58% and 60% of fragility fractures occurred in women with a BMD T-score ≤ -2.5 and a TBS <1.150, respectively, combining these two thresholds identified 77% of all women with an osteoporotic fracture. CONCLUSIONS Lumbar spine TBS, alone or in combination with BMD, predicted incident clinical fracture risk in a representative population-based sample of elderly post-menopausal women.
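
The key result above is that each threshold alone captures 58-60% of fracture cases, while flagging women positive on either criterion captures 77%. A small sketch of that sensitivity calculation on boolean arrays (synthetic data; thresholds as in the abstract):

    import numpy as np

    def capture_rates(bmd_t_score, tbs, fractured):
        """Fraction of fracture cases flagged by each criterion and by their union."""
        frac = np.asarray(fractured, bool)
        low_bmd = np.asarray(bmd_t_score) <= -2.5
        low_tbs = np.asarray(tbs) < 1.150
        # Sensitivity among women who fractured, for each rule and the combination.
        return low_bmd[frac].mean(), low_tbs[frac].mean(), (low_bmd | low_tbs)[frac].mean()

    # Synthetic illustration (not the study data):
    rng = np.random.default_rng(1)
    bmd_t = rng.normal(-1.8, 1.0, 556)
    tbs = rng.normal(1.2, 0.1, 556)
    fractured = rng.random(556) < 0.094
    print(capture_rates(bmd_t, tbs, fractured))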

Relevance: 20.00%

Abstract:

OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery for metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted during the time period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
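
The survival comparison above relies on risk-adjusted Cox models and stratified propensity score methods. A minimal sketch of propensity-score stratification with a Cox model stratified on score quintiles, assuming a pandas DataFrame with illustrative column names ("resected", "time_months", "death") and using scikit-learn plus lifelines:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from lifelines import CoxPHFitter

    def stratified_ps_cox(df, covariate_cols, treatment="resected",
                          duration="time_months", event="death", n_strata=5):
        """Treatment hazard ratio from a Cox model stratified on propensity score quintiles."""
        ps = LogisticRegression(max_iter=1000).fit(
            df[covariate_cols], df[treatment]).predict_proba(df[covariate_cols])[:, 1]
        data = df[[duration, event, treatment]].copy()
        data["ps_stratum"] = pd.qcut(ps, q=n_strata, labels=False)  # quintiles of the score
        cph = CoxPHFitter()
        cph.fit(data, duration_col=duration, event_col=event, strata=["ps_stratum"])
        return cph.hazard_ratios_[treatment]   # HR for resection vs. no resection

    # Usage sketch: hr = stratified_ps_cox(seer_df, ["age", "grade", "histology", "year"])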

Relevance: 20.00%

Abstract:

BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients has been developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD), but without ACLF, and to compare this with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine and INR were selected as the best predictors of mortality. The C-index for prediction of mortality was better for CLIF-C ADs compared with Child-Pugh, MELD, and MELD-Nas at predicting 3- and 12-month mortality in the derivation, internal validation and the external dataset. CLIF-C ADs improved in its ability to predict 3-month mortality using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77 respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. CLIF-C ADs therefore may be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.

Relevance: 20.00%

Abstract:

OBJECTIVE We endeavored to develop an unruptured intracranial aneurysm (UIA) treatment score (UIATS) model that includes and quantifies key factors involved in clinical decision-making in the management of UIAs and to assess agreement for this model among specialists in UIA management and research. METHODS An international multidisciplinary (neurosurgery, neuroradiology, neurology, clinical epidemiology) group of 69 specialists was convened to develop and validate the UIATS model using a Delphi consensus. For internal (39 panel members involved in identification of relevant features) and external validation (30 independent external reviewers), 30 selected UIA cases were used to analyze agreement with UIATS management recommendations based on a 5-point Likert scale (5 indicating strong agreement). Interrater agreement (IRA) was assessed with standardized coefficients of dispersion (vr*) (vr* = 0 indicating excellent agreement and vr* = 1 indicating poor agreement). RESULTS The UIATS accounts for 29 key factors in UIA management. Agreement with UIATS (mean Likert scores) was 4.2 (95% confidence interval [CI] 4.1-4.3) per reviewer for both reviewer cohorts; agreement per case was 4.3 (95% CI 4.1-4.4) for panel members and 4.5 (95% CI 4.3-4.6) for external reviewers (p = 0.017). Mean Likert scores were 4.2 (95% CI 4.1-4.3) for interventional reviewers (n = 56) and 4.1 (95% CI 3.9-4.4) for noninterventional reviewers (n = 12) (p = 0.290). Overall IRA (vr*) for both cohorts was 0.026 (95% CI 0.019-0.033). CONCLUSIONS This novel UIA decision guidance study captures an excellent consensus among highly informed individuals on UIA management, irrespective of their underlying specialty. Clinicians can use the UIATS as a comprehensive mechanism for indicating how a large group of specialists might manage an individual patient with a UIA.

Relevance: 20.00%

Abstract:

Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but that they depend on the constructive perception of the involved subjects. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the person on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being covered safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed by specifying generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality robust standard errors was chosen to estimate parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (βself-efficacy = 0.133, p < .001, OR = 1.142; βgroup-efficacy I = 0.128, p < .001, OR = 1.137; βoffensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (βgroup-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (βself-efficacy = 0.135, p < .01; βoffensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playing-field zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players embedded in a team, this psychological landscape is constituted not only by probabilities that refer to the potential and consequences of individual behavior, but also by those that refer to the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informers extend to aspects referring to the group level.
References: Lewin, K. (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). New York: Harper & Brothers.
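
The reported odds ratios are the exponentiated logit coefficients from the binomial models (e.g. exp(0.133) ≈ 1.142). A minimal sketch of an analogous logistic model in Python with statsmodels rather than Mplus, on a simulated player-by-zone data set with illustrative column names; unlike the original analysis, it ignores the clustering of zones within players:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 710  # e.g. 71 players x 10 zones
    df = pd.DataFrame({
        "self_efficacy": rng.uniform(1, 10, n),
        "group_efficacy_I": rng.uniform(1, 10, n),
        "offensive_convenience": rng.uniform(1, 10, n),
    })
    # Simulate zone choices from assumed effects, then refit the model.
    linpred = -3 + 0.13 * df["self_efficacy"] + 0.13 * df["group_efficacy_I"]
    df["chosen"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

    X = sm.add_constant(df[["self_efficacy", "group_efficacy_I", "offensive_convenience"]])
    fit = sm.GLM(df["chosen"], X, family=sm.families.Binomial()).fit()
    print(np.exp(fit.params))   # odds ratios = exp(beta)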

Relevance: 20.00%

Abstract:

Background: The efficacy of cognitive behavioral therapy (CBT) for the treatment of depressive disorders has been demonstrated in many randomized controlled trials (RCTs). This study investigated whether similar effects can be expected for CBT under routine care conditions when the patients are comparable to those examined in RCTs. Method: N=574 CBT patients from an outpatient clinic were matched stepwise to the patients undergoing CBT in the National Institute of Mental Health Treatment of Depression Collaborative Research Program (TDCRP). First, the exclusion criteria of the RCT were applied to the naturalistic sample of the outpatient clinic. Second, propensity score matching (PSM) was used to adjust the remaining naturalistic sample on the basis of baseline covariate distributions. Matched samples were then compared regarding treatment effects using effect sizes, the average treatment effect on the treated (ATT), and recovery rates. Results: CBT in the adjusted naturalistic subsample was as effective as in the RCT. However, treatments lasted significantly longer under routine care conditions. Limitations: The samples included only a limited number of common predictor variables and stemmed from different countries. There might be additional covariates that could further improve the matching between the samples. Conclusions: CBT for depression in clinical practice might be as effective as manual-based treatments in RCTs when applied to comparable patients. The fact that similar effects were reached under routine conditions with more sessions, however, points to the potential to optimize treatments in clinical practice with respect to their efficiency.
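
After matching, the outcome comparison above is summarized with effect sizes and the average treatment effect on the treated (ATT). A minimal sketch of both quantities for matched samples of post-treatment depression scores; the data and variable names are illustrative, not the TDCRP or clinic data:

    import numpy as np

    def att_and_effect_size(y_routine, y_matched_rct):
        """ATT as the mean paired outcome difference, plus a Cohen's-d-style
        standardized effect size based on the pooled SD."""
        y_r = np.asarray(y_routine, float)
        y_m = np.asarray(y_matched_rct, float)
        att = (y_r - y_m).mean()
        pooled_sd = np.sqrt((y_r.var(ddof=1) + y_m.var(ddof=1)) / 2)
        return att, att / pooled_sd

    # Illustrative post-treatment symptom scores for matched pairs:
    rng = np.random.default_rng(4)
    clinic = rng.normal(12.0, 6.0, 200)   # routine-care CBT
    rct = rng.normal(12.5, 6.0, 200)      # matched RCT patients
    print(att_and_effect_size(clinic, rct))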

Relevance: 20.00%

Abstract:

OBJECTIVES To longitudinally map the onset of, and identify risk factors for, skin sclerosis and digital ulcers (DUs) in patients with systemic sclerosis (SSc) from an early time point after the onset of Raynaud's phenomenon (RP) in the European Scleroderma Trials and Research (EUSTAR) cohort. METHODS 695 patients with SSc with a baseline visit within 1 year after RP onset were followed in the prospective multinational EUSTAR database. During the 10-year observation period, cumulative probabilities of cutaneous lesions were assessed with the Kaplan-Meier method. Cox proportional hazards regression analysis was used to evaluate risk factors. RESULTS The median modified Rodnan skin score (mRSS) peaked 1 year after RP onset, at 15 points. The 1-year probability of developing an mRSS ≥2 in at least one area of the arms and of the legs was 69% and 25%, respectively. Twenty-five per cent of patients developed diffuse cutaneous involvement in the first year after RP onset. This probability increased to 36% during the subsequent 2 years. Only 6% of patients developed diffuse cutaneous SSc thereafter. The probability of developing DUs increased to a maximum of 70% by the end of the 10-year observation period. The main factors associated with diffuse cutaneous SSc were the presence of anti-RNA polymerase III autoantibodies, followed by antitopoisomerase autoantibodies and male sex. The main factor associated with incident DUs was the presence of antitopoisomerase autoantibodies. CONCLUSION Early after RP onset, cutaneous manifestations exhibit rapid kinetics in SSc. This should be accounted for in clinical trials aiming to prevent skin worsening.
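
The cumulative probabilities above come from Kaplan-Meier estimates of the time from RP onset to each cutaneous manifestation, with the cumulative probability at a horizon read as one minus the estimated survival function. A minimal sketch with lifelines on synthetic data (not the EUSTAR cohort):

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(5)
    n = 695
    years_to_du = rng.exponential(scale=8.0, size=n)  # time from RP onset to first digital ulcer
    follow_up = rng.uniform(1, 10, size=n)            # administrative censoring
    observed = years_to_du <= follow_up
    durations = np.minimum(years_to_du, follow_up)

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed)
    for horizon in (1, 5, 10):
        cum_prob = 1.0 - kmf.predict(horizon)         # cumulative probability of a DU by `horizon` years
        print(f"P(DU by year {horizon}) ~ {cum_prob:.2f}")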