Abstract:
Both underuse and overuse of thromboprophylaxis in hospitalised medical patients are common. We aimed to explore clinical factors associated with the use of pharmacological or mechanical thromboprophylaxis in acutely ill medical patients at high (Geneva Risk Score ≥ 3 points) vs low (Geneva Risk Score < 3 points) risk of venous thromboembolism. Overall, 1,478 hospitalised medical patients from eight large Swiss hospitals were enrolled in the prospective Explicit ASsessment of Thromboembolic RIsk and Prophylaxis for Medical PATients in SwitzErland (ESTIMATE) cohort study. The study is registered on ClinicalTrials.gov, number NCT01277536. Use of thromboprophylaxis increased stepwise with increasing Geneva Risk Score (p < 0.001). Among the 962 high-risk patients, 366 (38%) received no thromboprophylaxis; cancer-associated thrombocytopenia (OR 4.78, 95% CI 2.75-8.31, p < 0.001), active bleeding on admission (OR 2.88, 95% CI 1.69-4.92, p < 0.001), and thrombocytopenia without cancer (OR 2.54, 95% CI 1.31-4.95, p = 0.006) were independently associated with the absence of prophylaxis. The use of thromboprophylaxis declined with increasing severity of thrombocytopenia (p = 0.001). Among the 516 low-risk patients, 245 (48%) received thromboprophylaxis; none of the investigated clinical factors predicted its use. In conclusion, bleeding and thrombocytopenia were the most important factors associated with the absence of thromboprophylaxis in high-risk acutely ill medical patients. The use of thromboprophylaxis among low-risk patients was inconsistent, without clearly identifiable predictors, and should be addressed in further research.
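To make the analysis concrete, the sketch below shows how such a risk-score split and multivariable logistic regression could be run in Python with statsmodels. It is a sketch only: the file and column names (estimate_cohort.csv, geneva_score, no_prophylaxis, and the three predictor columns) are hypothetical placeholders, not the ESTIMATE study's actual data or code.

```python
# Sketch of a Geneva-Risk-Score stratification and a multivariable logistic
# regression for "no thromboprophylaxis" among high-risk patients.
# All file and column names are hypothetical, not the ESTIMATE dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("estimate_cohort.csv")  # hypothetical file

# High VTE risk is defined as Geneva Risk Score >= 3 points.
high = df[df["geneva_score"] >= 3].copy()

# Outcome: 1 = no thromboprophylaxis given.
predictors = ["cancer_thrombocytopenia", "active_bleeding",
              "thrombocytopenia_no_cancer"]
X = sm.add_constant(high[predictors])
fit = sm.Logit(high["no_prophylaxis"], X).fit()

# Odds ratios with 95% CIs, analogous to the ORs reported above
# (the "const" row is the intercept and can be ignored).
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```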
Abstract:
BACKGROUND Oesophageal adenocarcinoma or Barrett's adenocarcinoma (EAC) is increasing in incidence, and stratification of prognosis might improve disease management. Multi-colour fluorescence in situ hybridisation (FISH) investigating ERBB2, MYC, CDKN2A and ZNF217 has recently shown promising results for the diagnosis of dysplasia and cancer using cytological samples. METHODS To identify markers of prognosis, we targeted the four selected gene loci using multi-colour FISH applied to a tissue microarray containing 130 EAC samples. Prognostic predictors (P1, P2, P3) based on genomic copy numbers of the four loci were statistically assessed for their ability to stratify patients according to overall survival in combination with clinical data. RESULTS The best stratification into favourable and unfavourable prognoses was achieved by P1, the percentage of cells with fewer than two ZNF217 signals; P2, the percentage of cells with fewer ERBB2 than ZNF217 signals; and P3, the overall ratio of ERBB2 to ZNF217 signals. Median survival times were 32 vs 73 months for P1, 28 vs 73 months for P2, and 27 vs 65 months for P3. P2 subdivided patients into distinct prognostic groups independently within each tumour grade, with median survival times differing by at least 35 months. CONCLUSIONS Cell signal numbers of the ERBB2 and ZNF217 loci showed independence from tumour stage and differentiation grade. The prognostic value of multi-colour FISH assays is applicable to EAC and is superior to single markers.
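The three predictors are simple functions of per-cell signal counts. A minimal sketch of how they could be computed, assuming a table with one row per scored cell and hypothetical column names (sample_id, erbb2, znf217), is:

```python
# Sketch: computing the three FISH-based predictors from per-cell signal
# counts. Assumes one row per scored cell; all column names hypothetical.
import pandas as pd

cells = pd.read_csv("fish_signals.csv")  # hypothetical input

def predictors(group: pd.DataFrame) -> pd.Series:
    return pd.Series({
        # P1: percentage of cells with fewer than two ZNF217 signals
        "P1": 100 * (group["znf217"] < 2).mean(),
        # P2: percentage of cells with fewer ERBB2 than ZNF217 signals
        "P2": 100 * (group["erbb2"] < group["znf217"]).mean(),
        # P3: overall ratio of ERBB2 to ZNF217 signals
        "P3": group["erbb2"].sum() / group["znf217"].sum(),
    })

per_sample = cells.groupby("sample_id").apply(predictors)
print(per_sample.head())
```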
Abstract:
Post-traumatic sleep-wake disturbances are common after acute traumatic brain injury. Increased sleep need per 24 h and excessive daytime sleepiness are among the most prevalent post-traumatic sleep disorders and impair the quality of life of trauma patients. Nevertheless, neither the relation between traumatic brain injury and sleep outcome nor the link between post-traumatic sleep problems and clinical measures in the acute phase after injury has so far been addressed in a controlled and prospective approach. We therefore performed a prospective controlled clinical study (i) to examine sleep-wake outcome after traumatic brain injury; and (ii) to screen for clinical and laboratory predictors of poor sleep-wake outcome after acute traumatic brain injury. Forty-two of 60 included patients with first-ever traumatic brain injury were available for follow-up examinations. Six months after trauma, the average sleep need per 24 h as assessed by actigraphy was markedly increased in patients as compared with controls (8.3 ± 1.1 h versus 7.1 ± 0.8 h, P < 0.0001). Objective daytime sleepiness was found in 57% of trauma patients and 19% of healthy subjects, and the average sleep latency in patients was reduced to 8.7 ± 4.6 min (12.1 ± 4.7 min in controls, P = 0.0009). Patients, but not controls, markedly underestimated both excessive sleep need and excessive daytime sleepiness when assessed only by subjective means, emphasizing the unreliability of self-assessment of increased sleep propensity in traumatic brain injury patients. At polysomnography, slow-wave sleep after traumatic brain injury was more consolidated. The most important risk factor for developing increased sleep need after traumatic brain injury was the presence of an intracranial haemorrhage. In conclusion, we provide controlled and objective evidence for a direct relation between sleep-wake disturbances and traumatic brain injury, and for clinically significant underestimation of post-traumatic sleep-wake disturbances by trauma patients.
Abstract:
BACKGROUND Despite substantial evidence supporting a pharmacogenetic approach to warfarin therapy in adults, evidence on the importance of genetics in warfarin therapy in children is limited, particularly for clinical outcomes. We assessed the contribution of CYP2C9/VKORC1/CYP4F2 genotypes and variation in other genes involved in vitamin K and coagulation pathways to warfarin dose and related clinical outcomes in children. PROCEDURE Clinical and genetic data for 93 children (age ≤ 18 years) who received warfarin therapy were obtained. DNA was genotyped for 93 selected single nucleotide polymorphisms using a custom assay. RESULTS With a median age of 4.8 years, our cohort included more young children than most previous studies. Overall, 76.3% of dose variability was explained by weight, indication, VKORC1-1639G/A and CYP2C9 *2/*3, with genotypes accounting for 21.1% of variability. There was a strong correlation (R² = 0.68; P < 0.001) between actual and predicted warfarin dose using a pediatric genotype-based dosing model. VKORC1 genotype had a significant impact on time to therapeutic international normalized ratio (INR) (P = 0.047) and time to over-anticoagulation (INR > 4; P = 0.024) during the initiation of therapy. CYP2C9*3 carriers were also at increased risk of major bleeding while receiving warfarin (adjusted OR = 11.28). An additional variant in CYP2C9 (rs7089580) was significantly associated with warfarin dose (P = 0.020) in a multivariate clinical and genetic model. CONCLUSIONS This study confirms the importance of VKORC1/CYP2C9 genotypes for warfarin dosing in a young pediatric cohort and demonstrates an impact of genetic factors on clinical outcomes in children. Furthermore, we identified an additional variant in CYP2C9 of potential relevance for warfarin dosing in children.
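A genotype-based dosing model of this kind is essentially a multivariable regression of dose on clinical and genetic covariates. The sketch below illustrates the approach in Python (statsmodels); the file and variable names are hypothetical, and the model is a plausible reading of the description above rather than the study's actual model.

```python
# Sketch of a genotype-based pediatric dosing model: regress log weekly
# warfarin dose on weight, indication and VKORC1/CYP2C9 genotypes, then
# compare predicted with actual dose. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("warfarin_peds.csv")  # hypothetical file

model = smf.ols(
    "np.log(dose_mg_per_week) ~ weight_kg + C(indication)"
    " + C(vkorc1_1639) + C(cyp2c9_star2) + C(cyp2c9_star3)",
    data=df,
).fit()
print(f"dose variability explained (R^2): {model.rsquared:.2f}")

# Correlation between actual and model-predicted dose,
# analogous to the reported R^2 = 0.68.
pred = np.exp(model.fittedvalues)
r2 = np.corrcoef(pred, df["dose_mg_per_week"])[0, 1] ** 2
print(f"actual vs predicted R^2: {r2:.2f}")
```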
Abstract:
OBJECTIVES HIV infection has been associated with an increased risk of chronic kidney disease (CKD). Little is known about the prevalence of CKD in individuals with high CD4 cell counts prior to initiation of antiretroviral therapy (ART). We sought to address this knowledge gap. METHODS We describe the prevalence of CKD among 4637 ART-naïve adults (mean age 36.8 years) with CD4 cell counts > 500 cells/μL at enrolment in the Strategic Timing of AntiRetroviral Treatment (START) study. CKD was defined by estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m² and/or dipstick urine protein ≥ 1+. Logistic regression was used to identify baseline characteristics associated with CKD. RESULTS Among 286 [6.2%; 95% confidence interval (CI) 5.5%, 6.9%] participants with CKD, the majority had isolated proteinuria. A total of 268 participants had urine protein ≥ 1+, including 41 with urine protein ≥ 2+. Only 22 participants (0.5%) had an eGFR < 60 mL/min/1.73 m², including four who also had proteinuria. Baseline characteristics independently associated with CKD included diabetes [adjusted odds ratio (aOR) 1.73; 95% CI 1.05, 2.85], hypertension (aOR 1.82; 95% CI 1.38, 2.38), and race/ethnicity (aOR 0.59; 95% CI 0.37, 0.93 for Hispanic vs. white). CONCLUSIONS We observed a low prevalence of CKD associated with traditional CKD risk factors among ART-naïve clinical trial participants with CD4 cell counts > 500 cells/μL.
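The CKD definition used here is a simple rule over two measurements; a minimal sketch, with hypothetical argument names, is:

```python
# Sketch of the CKD definition used above: eGFR < 60 mL/min/1.73 m^2
# and/or dipstick urine protein >= 1+. Argument names are hypothetical.
def has_ckd(egfr_ml_min_173m2: float, dipstick_protein_plus: int) -> bool:
    """dipstick_protein_plus: 0 for negative/trace, 1 for 1+, 2 for 2+, ..."""
    return egfr_ml_min_173m2 < 60 or dipstick_protein_plus >= 1

# Examples: isolated proteinuria (the majority of CKD cases above),
# reduced eGFR without proteinuria, and no CKD.
assert has_ckd(egfr_ml_min_173m2=95, dipstick_protein_plus=1)
assert has_ckd(egfr_ml_min_173m2=45, dipstick_protein_plus=0)
assert not has_ckd(egfr_ml_min_173m2=95, dipstick_protein_plus=0)
```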
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24%), trauma (1,522, 18%), infection categories [1,328, 15%; including sepsis (357, 4.1%), severe sepsis (249, 2.9%) and septic shock (27, 0.3%)], cardiovascular (1,022, 12%), gastrointestinal (848, 10%) and respiratory (449, 5%). The predictors in the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients at higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
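The model-building step described here, multivariable logistic regression with discrimination measured by the area under the ROC curve, could be sketched in Python as follows. File and column names are hypothetical, and the study's internal-validation procedure (e.g., bootstrapping) is omitted.

```python
# Sketch of a mortality prediction model of the kind described:
# multivariable logistic regression on admission variables, with
# discrimination measured by ROC AUC. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

ed = pd.read_csv("ed_admissions.csv")  # hypothetical file

fit = smf.logit(
    "died_in_hospital ~ age + prolonged_cap_refill + systolic_bp"
    " + mech_ventilation + oxygen_saturation + gcs + C(apache_category)",
    data=ed,
).fit()

# Apparent discrimination on the development data; internal validation
# would be needed before trusting this number.
auc = roc_auc_score(ed["died_in_hospital"], fit.predict(ed))
print(f"ROC AUC: {auc:.2f}")  # the published model reached 0.92
```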
Abstract:
BACKGROUND The risk factors and clinical sequelae of gastrointestinal bleeding (GIB) in the current era of drug-eluting stents, prolonged dual antiplatelet therapy, and potent P2Y12 inhibitors are not well established. We determined the frequency, predictors, and clinical impact of GIB after percutaneous coronary interventions (PCIs) in a contemporary cohort of consecutive patients treated with unrestricted use of drug-eluting stents. METHODS AND RESULTS Between 2009 and 2012, all consecutive patients undergoing PCI were prospectively included in the Bern PCI Registry. Bleeding Academic Research Consortium (BARC) GIB and cardiovascular outcomes were recorded within 1 year of follow-up. Among 6212 patients, 84.1% received new-generation drug-eluting stents and 19.5% received prasugrel. At 1 year, GIB had occurred in 65 patients (1.04%); 70.8% of all events and 84.4% of BARC ≥3B events were recorded >30 days after PCI. The majority of events (64.4%) were related to upper GIB with a more delayed time course compared with lower GIB. Increasing age, previous GIB, history of malignancy, smoking, and triple antithrombotic therapy (ie, oral anticoagulation plus dual antiplatelet therapy) were independent predictors of GIB in multivariable analysis. GIB was associated with increased all-cause mortality (adjusted hazard ratio, 3.40; 95% confidence interval, 1.67-6.92; P=0.001) and the composite of death, myocardial infarction, or stroke (adjusted hazard ratio, 3.75; 95% confidence interval, 1.99-7.07; P<0.001) and was an independent predictor of all-cause mortality during 1 year. CONCLUSIONS Among unselected patients undergoing PCI, GIB has a profound effect on prognosis. Triple antithrombotic therapy emerged as the single drug-related predictor of GIB in addition to patient-related risk factors within 1 year of PCI. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT02241291.
Abstract:
BACKGROUND We previously reported the 5-year followup of hips with femoroacetabular impingement (FAI) that underwent surgical hip dislocation with trimming of the head-neck junction and/or acetabulum including reattachment of the labrum. The goal of this study was to report a concise followup of these patients at a minimum of 10 years. QUESTIONS/PURPOSES We asked whether these patients had (1) improved hip pain and function; we then determined (2) the 10-year survival rate and (3) the factors predicting failure. METHODS Between July 2001 and March 2003, we performed surgical hip dislocation and femoral neck osteoplasty and/or acetabular rim trimming with labral reattachment in 75 patients (97 hips). Of those, 72 patients (93 hips [96%]) were available for followup at a minimum of 10 years (mean, 11 years; range, 10-13 years). We used the anterior impingement test to assess pain and the Merle d'Aubigné-Postel score to assess function. Survivorship was calculated using the method of Kaplan and Meier with any of the following as the definition of failure: conversion to total hip arthroplasty (THA), radiographic evidence of worsening osteoarthritis (OA), or a Merle d'Aubigné-Postel score less than 15. Predictors of these failures were identified using Cox regression analysis. RESULTS At 10-year followup, the prevalence of a positive impingement test had decreased from 95% preoperatively to 38% (p < 0.001), and the Merle d'Aubigné-Postel score had increased from 15.3 ± 1.4 (range, 9-17) preoperatively to 16.9 ± 1.3 (range, 12-18; p < 0.001). Survivorship of these procedures for any of the defined failures was 80% (95% confidence interval, 72%-88%). The strongest predictors of failure were age > 40 years (hazard ratio with 95% confidence interval, 5.9 [4.8-7.1], p = 0.002), body mass index > 30 kg/m² (5.5 [3.9-7.2], p = 0.041), a lateral center-edge angle < 22° or > 32° (5.4 [4.2-6.6], p = 0.006), and posterior acetabular coverage < 34% (4.8 [3.7-5.6], p = 0.006). CONCLUSIONS At 10-year followup, 80% of patients with FAI treated with surgical hip dislocation, osteoplasty, and labral reattachment had not progressed to THA, developed worsening OA, or had a Merle d'Aubigné-Postel score of less than 15. The radiographic predictors of failure were related to over- and undertreatment during acetabular rim trimming.
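The survivorship analysis described here, Kaplan-Meier estimation with a composite failure definition followed by Cox regression, could be sketched with the lifelines package as follows. The file and column names are hypothetical, and the covariates are assumed to be precomputed 0/1 indicators.

```python
# Sketch of a Kaplan-Meier survivorship analysis with a composite failure
# (THA conversion, OA progression, or Merle d'Aubigne-Postel score < 15)
# and a Cox model for predictors. All names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

hips = pd.read_csv("fai_hips.csv")  # hypothetical file

# Composite failure indicator from 0/1 columns and the score threshold.
hips["failed"] = (hips["converted_to_tha"].astype(bool)
                  | hips["oa_progression"].astype(bool)
                  | (hips["mda_score"] < 15)).astype(int)

km = KaplanMeierFitter().fit(hips["years_followup"], hips["failed"])
print(km.survival_function_at_times(10))  # 10-year survivorship

cox = CoxPHFitter().fit(
    hips[["years_followup", "failed", "age_over_40", "bmi_over_30",
          "lce_angle_out_of_range", "post_coverage_under_34"]],
    duration_col="years_followup", event_col="failed",
)
cox.print_summary()  # hazard ratios analogous to those reported above
```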
Abstract:
Acute-on-chronic liver failure (ACLF) is characterized by acute decompensation (AD) of cirrhosis, organ failure(s), and high 28-day mortality. We investigated whether assessments of patients at specific time points predicted their need for liver transplantation (LT) or the potential futility of their care. We assessed the clinical courses of 388 patients who had ACLF at enrollment, from February through September 2011, or during early (28-day) follow-up in the prospective multicenter European Chronic Liver Failure (CLIF) ACLF in Cirrhosis study. We assessed ACLF grades at different time points to define disease resolution, improvement, worsening, or a steady or fluctuating course. ACLF resolved or improved in 49.2% of patients, had a steady or fluctuating course in 30.4%, and worsened in 20.4%. The 28-day transplant-free mortality was low to moderate (6%-18%) in patients with a nonsevere early course (final no ACLF or ACLF-1) and high to very high (42%-92%) in those with a severe early course (final ACLF-2 or -3), independently of initial grades. Independent predictors of course severity were the CLIF Consortium ACLF score (CLIF-C ACLFs) and the presence of liver failure (total bilirubin ≥12 mg/dL) at ACLF diagnosis. Eighty-one percent of patients had reached their final ACLF grade by 1 week, resulting in accurate prediction of short-term (28-day) and mid-term (90-day) mortality by the ACLF grade at days 3-7. Among patients who underwent early LT, 75% survived for at least 1 year. Among patients with ≥4 organ failures or a CLIF-C ACLFs >64 at days 3-7 who did not undergo LT, mortality was 100% by 28 days. CONCLUSIONS Assessment of ACLF patients at days 3-7 of the syndrome provides a tool to define the urgency of LT and a rational basis for discontinuing intensive care owing to futility.
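The course definitions (resolved, improved, worsened, steady, fluctuating) can be expressed as a small classification rule over the sequence of ACLF grades. The function below is an illustrative simplification under that reading, not the CLIF study's actual algorithm.

```python
# Sketch of classifying the 28-day ACLF course from grades assessed at
# enrollment and follow-up (0 = no ACLF, 1-3 = ACLF grade).
def classify_course(initial_grade: int, grades_over_followup: list[int]) -> str:
    final = grades_over_followup[-1]
    if final == 0:
        return "resolved"
    if final < initial_grade:
        return "improved"
    if final > initial_grade:
        return "worsened"
    # Same final grade: steady if unchanged throughout, else fluctuating.
    if all(g == initial_grade for g in grades_over_followup):
        return "steady"
    return "fluctuating"

print(classify_course(2, [2, 1, 0]))  # resolved
print(classify_course(1, [2, 3]))     # worsened
print(classify_course(2, [3, 1, 2]))  # fluctuating
```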
Abstract:
Intrahepatic cholangiocarcinomas are the second most common primary liver malignancies, with an increasing incidence over the past decades. Owing to a lack of early symptoms and their aggressive oncobiological behavior, the diagnostic approach is challenging and the outcome remains unsatisfactory, with a poor prognosis. A consistent staging system is therefore needed for comparisons between different therapeutic approaches, but the independent predictors of worse survival remain controversial. Currently, four different staging systems are primarily used, which differ in the way they determine the 'T' category. Furthermore, several nomograms and prognostic models have recently been proposed and may provide additional information for predicting prognosis, thereby helping to select an adequate treatment strategy. This review discusses the diagnostic approach to intrahepatic cholangiocarcinoma and compares and contrasts the most current staging systems and prognostic models.
Abstract:
OBJECTIVE To describe all patients admitted to children's hospitals in Switzerland with a diagnosis of influenza A/H1N1/09 virus infection during the 2009 influenza pandemic, and to analyse their characteristics, predictors of complications, and outcome. METHODS All patients ≤18 years old hospitalised in eleven children's hospitals in Switzerland between June 2009 and January 2010 with a positive influenza A/H1N1/09 reverse transcriptase polymerase chain reaction (RT-PCR) from a nasopharyngeal specimen were included. RESULTS There were 326 PCR-confirmed patients, of whom 189 (58%) were younger than 5 years of age and 126 (38.7%) had one or more pre-existing medical conditions. Fever (median 39.1 °C) was the most common sign (85.6% of all patients), while feeding problems (p = 0.003) and febrile seizures (p = 0.016) were significantly more frequent in children under 5 years. In 142 (43.6%) patients there was clinical suspicion of a concomitant bacterial infection, which was confirmed in 36 patients (11%); severe bacterial infection was observed in 4% of patients. One third (n = 108, 33.1%) of the patients were treated with oseltamivir, 64 of them (59.3%, or 20% overall) within 48 hours of symptom onset. Almost half of the patients (45.1%) received antibiotics, for a median of 7 days. Twenty patients (6.1%) required intensive care, mostly for complicated pneumonia (50%) without an underlying medical condition. The median duration of hospitalisation was 2 days (range 0-39) for 304 patients. Two children (<15 months of age, with underlying disease) died. CONCLUSIONS Although pandemic influenza A/H1N1/09 virus infection in children is mostly mild, it can be severe, regardless of past history or underlying disease.
Abstract:
OBJECTIVES To longitudinally map the onset of, and identify risk factors for, skin sclerosis and digital ulcers (DUs) in patients with systemic sclerosis (SSc), from an early time point after the onset of Raynaud's phenomenon (RP), in the European Scleroderma Trials and Research (EUSTAR) cohort. METHODS 695 patients with SSc with a baseline visit within 1 year after RP onset were followed in the prospective multinational EUSTAR database. Over the 10-year observation period, cumulative probabilities of cutaneous lesions were assessed with the Kaplan-Meier method. Cox proportional hazards regression analysis was used to evaluate risk factors. RESULTS The median modified Rodnan skin score (mRSS) peaked 1 year after RP onset, at 15 points. The 1-year probability of developing an mRSS ≥2 in at least one area was 69% for the arms and 25% for the legs. Twenty-five per cent of patients developed diffuse cutaneous involvement in the first year after RP onset. This probability increased to 36% during the subsequent 2 years; only 6% of patients developed diffuse cutaneous SSc thereafter. The probability of developing DUs increased to a maximum of 70% at the end of the 10-year observation. The main factors associated with diffuse cutaneous SSc were the presence of anti-RNA polymerase III autoantibodies, followed by antitopoisomerase autoantibodies and male sex. The main factor associated with incident DUs was the presence of antitopoisomerase autoantibodies. CONCLUSION Early after RP onset, cutaneous manifestations exhibit rapid kinetics in SSc. This should be accounted for in clinical trials aiming to prevent skin worsening.
Abstract:
BACKGROUND Cardiac troponin detected by new-generation, highly sensitive assays predicts clinical outcomes among patients with stable coronary artery disease (SCAD) treated medically. The prognostic value of baseline high-sensitivity cardiac troponin T (hs-cTnT) elevation in SCAD patients undergoing elective percutaneous coronary interventions is not well established. This study assessed the association of preprocedural levels of hs-cTnT with 1-year clinical outcomes among SCAD patients undergoing percutaneous coronary intervention. METHODS AND RESULTS Between 2010 and 2014, 6974 consecutive patients were prospectively enrolled in the Bern Percutaneous Coronary Interventions Registry. Among patients with SCAD (n=2029), 527 (26%) had elevated preprocedural hs-cTnT above the upper reference limit of 14 ng/L. The primary end point, mortality within 1 year, occurred in 20 patients (1.4%) with normal hs-cTnT versus 39 patients (7.7%) with elevated baseline hs-cTnT (P<0.001). Patients with elevated hs-cTnT had increased risks of all-cause (hazard ratio 5.73; 95% confidence interval 3.34-9.83; P<0.001) and cardiac mortality (hazard ratio 4.68; 95% confidence interval 2.12-10.31; P<0.001). Preprocedural hs-cTnT elevation remained an independent predictor of 1-year mortality after adjustment for relevant risk factors, including age, sex, and renal failure (adjusted hazard ratio 2.08; 95% confidence interval 1.10-3.92; P=0.024). A graded mortality risk was observed across higher tertiles of elevated preprocedural hs-cTnT, but not among patients with hs-cTnT below the upper reference limit. CONCLUSIONS Preprocedural elevation of hs-cTnT is observed in one fourth of SCAD patients undergoing elective percutaneous coronary intervention. Increased levels of preprocedural hs-cTnT are proportionally related to the risk of death and emerged as independent predictors of all-cause mortality within 1 year. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT02241291.
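The graded-risk analysis described here, comparing mortality across tertiles of elevated preprocedural hs-cTnT, could be sketched as follows with pandas and lifelines; the file and column names are hypothetical, and this is one plausible way to set up such an analysis rather than the registry's actual code.

```python
# Sketch: split patients with elevated preprocedural hs-cTnT (> 14 ng/L)
# into tertiles and fit a Cox model for 1-year mortality by tertile.
# All file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

pci = pd.read_csv("bern_pci_scad.csv")  # hypothetical file

elevated = pci[pci["hs_ctnt_ng_l"] > 14].copy()
elevated["tnt_tertile"] = pd.qcut(elevated["hs_ctnt_ng_l"], 3,
                                  labels=[1, 2, 3]).astype(int)

# Dummy-code tertiles 2 and 3 against tertile 1 as the reference group.
data = pd.get_dummies(
    elevated[["days_to_death_or_censor", "died", "tnt_tertile"]],
    columns=["tnt_tertile"], drop_first=True, dtype=float,
)
cox = CoxPHFitter().fit(data, duration_col="days_to_death_or_censor",
                        event_col="died")
cox.print_summary()  # hazard ratios for tertiles 2 and 3 vs tertile 1
```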
Abstract:
Background. A review of the literature suggests that hypertension (HTN) in older adults is associated with sympathetic stimulation that results in increased blood pressure (BP) reactivity. If clinical assessment of BP captured sympathetic stimulation, it would be valuable for hypertension management. Objectives. The study examined whether reactive change scores from a short BP reactivity (BPR) protocol, resting BP, or resting pulse pressure (PP) better predicts 24-hour ambulatory BP and BP load in cardiac patients. Method. The study used a single-group design, with both an experimental clinical component and an observational field component; both components used repeated measurement methods. The study population consisted of 45 adult patients with a mean age of 64.6 ± 8.5 years who were diagnosed with cardiac disease and who were taking anti-hypertensive medication. Blood pressure reactivity was operationalized with a speech protocol, during which BP was measured with an automatic device (Dinamap 825XT) while subjects talked about their health and about their usual day. Twenty-four-hour ambulatory BP measurement (Spacelabs 90207 monitor) followed the speech protocol. Results. Resting SBP and resting PP were significant predictors of 24-hour SBP, and resting SBP was a significant predictor of SBP load. No significant predictors of 24-hour DBP or DBP load were found. Conclusions. Initial resting BP and PP may be used in clinical settings to assess hypertension management. Future studies are necessary to confirm the ability of resting BP to predict ambulatory BP and BP load in older, medicated hypertensive patients.