138 results for "low risk population"
Abstract:
BACKGROUND: Clinical scores may help physicians to better assess the individual risk/benefit of oral anticoagulant therapy. We aimed to externally validate and compare the prognostic performance of 7 clinical prediction scores for major bleeding events during oral anticoagulation therapy. METHODS: We followed 515 adult patients taking oral anticoagulants to measure the first major bleeding event over a 12-month follow-up period. The performance of each score to predict the risk of major bleeding and the physician's subjective assessment of bleeding risk were compared with the C statistic. RESULTS: The cumulative incidence of a first major bleeding event during follow-up was 6.8% (35/515). According to the 7 scoring systems, the proportions of major bleeding ranged from 3.0% to 5.7% for low-risk, 6.7% to 9.9% for intermediate-risk, and 7.4% to 15.4% for high-risk patients. The overall predictive accuracy of the scores was poor, with the C statistic ranging from 0.54 to 0.61 and not significantly different from each other (P=.84). Only the Anticoagulation and Risk Factors in Atrial Fibrillation score performed slightly better than would be expected by chance (C statistic, 0.61; 95% confidence interval, 0.52-0.70). The performance of the scores was not statistically better than physicians' subjective risk assessments (C statistic, 0.55; P=.94). CONCLUSION: The performance of 7 clinical scoring systems in predicting major bleeding events in patients receiving oral anticoagulation therapy was poor and not better than physicians' subjective assessments.
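The C statistic used throughout this abstract is, for a binary outcome, the probability that a randomly selected patient who bled was assigned a higher predicted risk than a randomly selected patient who did not (0.5 = chance, 1.0 = perfect discrimination). A minimal sketch with made-up scores, not data from the study:

```python
def c_statistic(scores, outcomes):
    """Concordance (C) statistic for a binary outcome: the fraction of
    (event, non-event) pairs in which the event case received the
    higher risk score; tied scores count as 0.5."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in events for n in non_events
    )
    return concordant / (len(events) * len(non_events))

# Perfectly ranked scores give C = 1.0; uninformative ones give 0.5.
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
print(c_statistic([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]))  # 0.5
```

A C statistic of 0.54-0.61, as reported for the seven bleeding scores, therefore sits barely above the 0.5 of a coin flip.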
Abstract:
Active surveillance in prostate cancer. The spread of PSA screening has almost doubled the incidence of prostate cancer over the last twenty years. An improved understanding of the natural history of this cancer allows risk stratification of the disease and better prediction of insignificant prostate cancer. Active surveillance has recently been proposed as a new option to delay or avoid radical treatment in patients with low-risk disease. The principles, results and future perspectives of this management modality are discussed in this review.
Abstract:
OBJECTIVES: Evaluation of the clinical impact of multiple infections of the cervix by human papillomavirus, including human papillomavirus-16, compared with single human papillomavirus-16 infection. STUDY DESIGN: One hundred sixty-nine women were classified in 3 categories depending on their human papillomavirus profile: human papillomavirus-16 only, human papillomavirus-16 and low-risk type(s), and human papillomavirus-16 and other high-risk type(s). Cervical brush samples were analyzed for human papillomavirus DNA by polymerase chain reaction and reverse line blot hybridization. All women were evaluated with colposcopy during 24 months or more. Management was according to the Bethesda recommendations. RESULTS: Women infected with human papillomavirus-16 and other high-risk human papillomavirus type(s) presented more progression or no change in the grade of dysplasia, compared with women of the other groups (relative risk [RR], 1.39; 95% confidence interval [CI], 1.07-1.82; P = .02 at 6 months; RR, 2.10; 95% CI, 1.46-3.02; P < .001 at 12 months; RR, 1.82; 95% CI, 1.21-2.72; P = .004 at 24 months). CONCLUSION: Coinfection of women with human papillomavirus-16 and other high-risk human papillomavirus type(s) increases the risk of unfavorable evolution.
Abstract:
Background Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits based on the presence of fever and myalgia. The area under the receiver operating characteristic curves (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2 and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group compared with only 38% for Model 1 and 54% for Model 3. Conclusions A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
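The appeal of Model 2 is that it reduces to two questions a clinician can answer at the bedside. A toy sketch of such a two-split tree (splits on temperature and chills, as in the abstract); the 38.0 °C cutoff is a hypothetical illustration, since the abstract does not report the actual split point:

```python
def classify_flu_risk(temperature_c, has_chills):
    """Toy two-split decision tree in the spirit of Model 2.
    The 38.0 degC cutoff is an assumed value for illustration only;
    patients not reaching a high- or low-risk leaf would need testing."""
    if temperature_c >= 38.0:
        return "high risk" if has_chills else "indeterminate"
    return "low risk"

print(classify_flu_risk(38.6, True))   # high risk
print(classify_flu_risk(37.2, False))  # low risk
```

Under this scheme, only the "indeterminate" branch (febrile patients without chills) would be referred for diagnostic testing, which is how the model can spare testing for two-thirds of patients.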
Abstract:
Background: More clinical trials are now being conducted in Africa and Asia, so background morbidity in the respective populations is of interest. Between 2000 and 2007, the International AIDS Vaccine Initiative sponsored 19 Phase 1 or 2A preventive HIV vaccine trials in the US, Europe, Sub-Saharan Africa and India, enrolling 900 healthy HIV-1 uninfected volunteers. Objective: To assess background morbidity as reflected by unsolicited adverse events (AEs), unrelated to study vaccine, reported in clinical trials from four continents. Methods: All but three clinical trials were double-blind, randomized, and placebo-controlled. Study procedures and data collection methods were standardized. The frequency and severity of AEs reported during the first year of the trials were analyzed. To avoid confounding by vaccine-related events, solicited reactogenicity and other AEs occurring within 28 days after any vaccination were excluded. Results: In total, 2134 AEs were reported by 76% of all participants; 73% of all events were mild. The rate of AEs did not differ between placebo and vaccine recipients. Overall, the percentage of participants with any AE was higher in Africa (83%) than in Europe (71%), the US (74%) and India (65%), while the percentage of participants with AEs of moderate or greater severity was similar in all regions except India. In all regions, the most frequently reported AEs were infectious diseases, followed by gastrointestinal disorders. Conclusions: Despite some regional differences, in these healthy participants selected for low risk of HIV infection, background morbidity posed no obstacle to clinical trial conduct and interpretation. Data from controlled clinical trials of preventive interventions can offer valuable insights into the health of the eligible population.
Abstract:
To create an instrument to be used in an outpatient clinic to detect adolescents prone to risk-taking behaviours. Based on previous research, five identified variables (relationship with parents, relationship with teachers, liking going to school, average grades, and level of religiosity) were used to create a screening tool to detect at least one of ten risky behaviours (tobacco, alcohol, cannabis and other illegal drug use; sexual intercourse and risky sexual behaviour; driving while intoxicated, riding with an intoxicated driver, not always using a seat belt, and not always using a helmet). The instrument was tested using the Barcelona Adolescent Health Survey 1993. A receiver operating characteristic curve was used to find the best cut-off point between high- and low-risk scores. Odds ratios and 95% confidence intervals were calculated for detecting at least one risky behaviour and for each individual behaviour. To assess its predictive value, the analysis was repeated using the Barcelona Adolescent Health Survey 1999. In both cases, analyses were conducted for the whole sample and separately for younger and older adolescents. Adolescents with a high-risk score were more likely to engage in at least one risky behaviour, both in the whole sample and within age groups. With very few exceptions, the Behaviour Evaluation for Risk-Taking Adolescents showed significant odds ratios for each individual behaviour. CONCLUSION: The Behaviour Evaluation for Risk-Taking Adolescents has shown its potential as an easy-to-use instrument to screen for risk-taking behaviours. Future research should aim at assessing this instrument's predictive value in the clinical setting and its application to other populations.
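The odds ratios with 95% confidence intervals reported for the screening tool come from 2×2 tables (screening result × behaviour), with the CI obtained from the standard normal approximation on the log odds ratio. A minimal sketch with illustrative counts, not data from the survey:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
        a = high-risk score, behaviour present
        b = high-risk score, behaviour absent
        c = low-risk score,  behaviour present
        d = low-risk score,  behaviour absent
    CI uses the normal approximation on log(OR),
    with SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration:
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.67 (95% CI 1.42-5.02)
```

A CI excluding 1.0, as here, corresponds to the "significant odds ratios" the abstract reports.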
Abstract:
OBJECTIVE: Whether or not a high risk of falls increases the risk of bleeding in patients receiving anticoagulants remains a matter of debate. METHODS: We conducted a prospective cohort study involving 991 patients ≥65 years of age who received anticoagulants for acute venous thromboembolism (VTE) at nine Swiss hospitals between September 2009 and September 2012. The study outcomes were as follows: the time to a first major episode of bleeding; and clinically relevant nonmajor bleeding. We determined the associations between the risk of falls and the time to a first episode of bleeding using competing risk regression, accounting for death as a competing event. We adjusted for known bleeding risk factors and anticoagulation as a time-varying covariate. RESULTS: Four hundred fifty-eight of 991 patients (46%) were at high risk of falls. The mean duration of follow-up was 16.7 months. Patients at high risk of falls had a higher incidence of major bleeding (9.6 vs. 6.6 events/100 patient-years; P = 0.05) and a significantly higher incidence of clinically relevant nonmajor bleeding (16.7 vs. 8.3 events/100 patient-years; P < 0.001) than patients at low risk of falls. After adjustment, a high risk of falls was associated with clinically relevant nonmajor bleeding [subhazard ratio (SHR) = 1.74, 95% confidence interval (CI) = 1.23-2.46], but not with major bleeding (SHR = 1.24, 95% CI = 0.83-1.86). CONCLUSION: In elderly patients who receive anticoagulants because of VTE, a high risk of falls is significantly associated with clinically relevant nonmajor bleeding, but not with major bleeding. Whether or not a high risk of falls is a reason against providing anticoagulation beyond 3 months should be based on patient preferences and the risk of VTE recurrence.
Abstract:
BACKGROUND: Prevention of cardiovascular disease (CVD) at the individual level should rely on the assessment of absolute risk using population-specific risk tables. OBJECTIVE: To compare the predictive accuracy of the original and the calibrated SCORE functions regarding 10-year cardiovascular risk in Switzerland. DESIGN: Cross-sectional, population-based study (5773 participants aged 35-74 years). METHODS: The SCORE equation for low-risk countries was calibrated based on the Swiss CVD mortality rates and on the CVD risk factor levels from the study sample. The predicted number of CVD deaths after a 10-year period was computed from the original and the calibrated equations and from the observed cardiovascular mortality for 2003. RESULTS: According to the original and calibrated functions, 16.3% and 15.8% of men and 8.2% and 8.9% of women, respectively, had a 10-year CVD risk ≥5%. The concordance correlation coefficient between the two functions was 0.951 for men and 0.948 for women (both P<0.001). Both risk functions adequately predicted the 10-year cumulative number of CVD deaths: in men, 71 (original) and 74 (calibrated) deaths versus 73 deaths based on observed CVD mortality rates; in women, 44 (original), 45 (calibrated) and 45 (observed), respectively. Compared with the original function, the calibrated function classified more women and fewer men as high risk. Moreover, the calibrated function gave better risk estimates among participants aged over 65 years. CONCLUSION: The original SCORE function adequately predicts CVD death in Switzerland, particularly for individuals aged less than 65 years. The calibrated function provides more reliable estimates for older individuals.
Abstract:
Cancer testis antigens (CTAs) are expressed in a variety of malignant tumors but not in normal adult tissues, except germ cells and occasionally placenta. Because of this tumor-associated pattern of expression, CTAs are regarded as potential vaccine targets. The expression of CTAs in gastrointestinal stromal tumors (GIST) has not previously been analyzed systematically. The present study was performed to analyze the expression of CTAs in GIST and to determine whether CTA expression correlates with prognosis. Thirty-five GIST patients were retrospectively analyzed for their expression of CTAs by immunohistochemistry using the following monoclonal antibodies (mAb/antigen): MA454/MAGE-A1, M3H67/MAGE-A3, 57B/MAGE-A4, CT7-33/MAGE-C1 and E978/NY-ESO-1. Fourteen tumors (40%) expressed 1 or more of the 5 CTAs tested. Fourteen percent (n = 5/35) were positive for each of MAGE-A1, MAGE-A3 and MAGE-A4. Twenty-six percent (n = 9/35) stained positive for MAGE-C1 and 20% (n = 7/35) for NY-ESO-1. A highly significant correlation between CTA expression and tumor recurrence risk was observed (71% vs. 29%; p = 0.027). In our study population, high-risk GISTs expressed CTAs more frequently than low-risk GISTs (p = 0.012). High-risk GISTs that stained positive for at least 1 CTA recurred in 100% (n = 25) of cases. This is the first study to analyze CTA expression in GIST and its prognostic value for recurrence. CTA staining could add information to the individual patient's prognosis and represents an interesting target for future treatment strategies.
Abstract:
The goal of this interdisciplinary study is to better understand the land use factors that increase the vulnerability of mountain areas in northern Pakistan. The study will first identify and analyse the damage and losses caused by the October 2005 earthquake in two areas of the same valley: a "low-risk" watershed with sound natural resource management and a "high-risk", ecologically degraded watershed. Second, the study will examine natural and man-made causes of secondary hazards in the study area, especially landslides; and third, it will evaluate the cost of the earthquake damage in the study areas in terms of the livelihoods of local communities and the sub-regional economy. Few interdisciplinary studies have correlated community land use practices, resource management, and disaster risk reduction in high-risk mountain areas. By better understanding these linkages, development, humanitarian, and donor agencies focused on disaster reduction can improve their risk reduction programs for mountainous regions.
Abstract:
Objectives: The relevance of the SYNTAX score for the particular case of patients with acute ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PPCI) has previously only been studied in post hoc analyses of large prospective randomized clinical trials. A "real-life" population approach has never been explored before. The aim of this study was to evaluate the value of the SYNTAX score for predicting myocardial infarction size, estimated by the creatine kinase (CK) peak value, in patients treated with PPCI for acute STEMI. Methods: The primary endpoint of the study was myocardial infarction size as measured by the CK peak value. The SYNTAX score was calculated retrospectively in 253 consecutive patients with acute STEMI undergoing PPCI in a large tertiary referral center in Switzerland between January 2009 and June 2010. Linear regression analysis was performed to compare myocardial infarction size with the SYNTAX score. This endpoint was then stratified according to SYNTAX score tertiles: low <22 (n=178), intermediate 22-32 (n=60), and high ≥33 (n=15). Results: There were no significant differences in clinical characteristics between the three groups. When stratified according to SYNTAX score tertiles, average CK peak values of 1985 (low), 3336 (intermediate) and 3684 (high) were obtained (p<0.0001). Bartlett's test for equal variances between the three groups was 9.999 (p<0.0067). A moderate Pearson product-moment correlation coefficient (r=0.4074) with a high statistical significance level (p<0.0001) was found.
The coefficient of determination (R²=0.1660) showed that approximately 17% of the variation in CK peak value (myocardial infarction size) could be explained by the SYNTAX score, i.e. by coronary disease complexity. Conclusion: In an all-comers population, the SYNTAX score is an additional tool for predicting myocardial infarction size in patients treated with PPCI. Stratifying patients into risk groups according to the SYNTAX score makes it possible to identify a high-risk population that may warrant particular patient care.
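The two headline statistics here are linked: in simple linear regression the coefficient of determination is the square of the Pearson correlation, so r = 0.4074 implies R² = 0.4074² ≈ 0.166, exactly the value reported. A minimal sketch of the computation (the sample data are illustrative, not study values):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient:
    cov(x, y) / (sd(x) * sd(y)), computed from raw sums."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# For simple linear regression, R^2 = r^2, so the reported
# r = 0.4074 gives R^2 = 0.4074**2 = 0.1660 (about 17%).
print(round(0.4074 ** 2, 4))  # 0.166

# Perfectly linear toy data yield r = 1.0:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```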
Abstract:
BACKGROUND: Since 1981 Princess Margaret Hospital has used initial active surveillance (AS) with delayed treatment at relapse as the preferred management for all patients with clinical stage I nonseminomatous germ cell tumors (NSGCT). OBJECTIVE: Our aim was to report our overall AS experience and compare outcomes over different periods using this non-risk-adapted approach. DESIGN, SETTING, AND PARTICIPANTS: Three hundred and seventy-one patients with stage I NSGCT were managed by AS from 1981 to 2005. For analysis by time period, patients were divided into two cohorts by diagnosis date: initial cohort, 1981-1992 (n=157), and recent cohort, 1993-2005 (n=214). INTERVENTION: Patients were followed at regular intervals, and treatment was only given for relapse. MEASUREMENTS: Recurrence rates, time to relapse, risk factors for recurrence, disease-specific survival, and overall survival were determined. RESULTS AND LIMITATIONS: With a median follow-up of 6.3 yr, 104 patients (28%) relapsed: 53 of 157 (33.8%) in the initial group and 51 of 214 (23.8%) in the recent group. Median time to relapse was 7 mo. Lymphovascular invasion (p<0.0001) and pure embryonal carcinoma (p=0.02) were independent predictors of recurrence; 125 patients (33.7%) were designated as high risk based on the presence of one or both factors. In the initial cohort, 66 of 157 patients (42.0%) were high risk and 36 of 66 patients (54.5%) relapsed versus 17 of 91 low-risk patients (18.7%) (p<0.0001). In the recent cohort, 59 of 214 patients (27.6%) were high risk and 29 of 59 had a recurrence (49.2%) versus 22 of 155 low-risk patients (14.2%) (p<0.0001). Three patients (0.8%) died from testis cancer. The estimated 5-yr disease-specific survival was 99.3% in the initial group and 98.9% in the recent one. CONCLUSIONS: Non-risk-adapted surveillance is an effective, simple strategy for the management of all stage I NSGCT.
Abstract:
It has been recently shown (Seddiki, N., B. Santner-Nanan, J. Martinson, J. Zaunders, S. Sasson, A. Landay, M. Solomon, W. Selby, S.I. Alexander, R. Nanan, et al. 2006. J. Exp. Med. 203:1693-1700.) that the expression of interleukin (IL) 7 receptor (R) alpha discriminates between two distinct CD4 T cell populations, both characterized by the expression of CD25, i.e. CD4 regulatory T (T reg) cells and activated CD4 T cells. T reg cells express low levels of IL-7Ralpha, whereas activated CD4 T cells are characterized by the expression of IL-7Ralpha(high). We have investigated the distribution of these two CD4 T cell populations in 36 subjects after liver and kidney transplantation and in 45 healthy subjects. According to a previous study (Demirkiran, A., A. Kok, J. Kwekkeboom, H.J. Metselaar, H.W. Tilanus, and L.J. van der Laan. 2005. Transplant. Proc. 37:1194-1196.), we observed that the T reg CD25(+)CD45RO(+)IL-7Ralpha(low) cell population was reduced in transplant recipients (P < 0.00001). Interestingly, the CD4(+)CD25(+)CD45RO(+)IL-7Ralpha(high) cell population was significantly increased in stable transplant recipients compared with healthy subjects (P < 0.00001), and the expansion of this cell population was even greater in patients with documented humoral chronic rejection compared with stable transplant recipients (P < 0.0001). The expanded CD4(+)CD25(+)CD45RO(+)IL-7Ralpha(high) cell population contained allospecific CD4 T cells and secreted effector cytokines such as tumor necrosis factor alpha and interferon gamma, thus potentially contributing to the mechanisms of chronic rejection. More importantly, CD4(+)IL-7Ralpha(+)and CD25(+)IL-7Ralpha(+) cells were part of the T cell population infiltrating the allograft of patients with a documented diagnosis of chronic humoral rejection. 
These results indicate that the CD4(+)CD25(+)IL-7Ralpha(+) cell population may represent a valuable, sensitive, and specific marker to monitor allospecific CD4 T cell responses both in blood and in tissues after organ transplantation.
Abstract:
BACKGROUND: The risk of falls is the most commonly cited reason for not providing oral anticoagulation, although the risk of bleeding associated with falls on oral anticoagulants is still debated. We aimed to evaluate whether patients on oral anticoagulation with high falls risk have an increased risk of major bleeding. METHODS: We prospectively studied consecutive adult medical patients who were discharged on oral anticoagulants. The outcome was the time to a first major bleed within a 12-month follow-up period adjusted for age, sex, alcohol abuse, number of drugs, concomitant treatment with antiplatelet agents, and history of stroke or transient ischemic attack. RESULTS: Among the 515 enrolled patients, 35 patients had a first major bleed during follow-up (incidence rate: 7.5 per 100 patient-years). Overall, 308 patients (59.8%) were at high risk of falls, and these patients had a nonsignificantly higher crude incidence rate of major bleeding than patients at low risk of falls (8.0 vs 6.8 per 100 patient-years, P=.64). In multivariate analysis, a high falls risk was not statistically significantly associated with the risk of a major bleed (hazard ratio 1.09; 95% confidence interval, 0.54-2.21). Overall, only 3 major bleeds occurred directly after a fall (incidence rate: 0.6 per 100 patient-years). CONCLUSIONS: In this prospective cohort, patients on oral anticoagulants at high risk of falls did not have a significantly increased risk of major bleeds. These findings suggest that being at risk of falls is not a valid reason to avoid oral anticoagulants in medical patients.
Abstract:
INTRODUCTION: We investigated whether mRNA levels of E2F1, a key transcription factor involved in proliferation, differentiation and apoptosis, could be used as a surrogate marker for the determination of breast cancer outcome. METHODS: E2F1 and other proliferation markers were measured by quantitative RT-PCR in 317 primary breast cancer patients from the Stiftung Tumorbank Basel. Correlations to one another, as well as to estrogen receptor and ERBB2 status and clinical outcome, were investigated. Results were validated and further compared with expression-based prognostic profiles using The Netherlands Cancer Institute microarray data set reported by Fan and colleagues. RESULTS: E2F1 mRNA expression levels correlated strongly with the expression of other proliferation markers, and low values were mainly found in estrogen receptor-positive and ERBB2-negative phenotypes. Patients with low E2F1-expressing tumors had a favorable outcome (hazard ratio = 4.3; 95% confidence interval = 1.8-9.9; P = 0.001). These results were consistent in univariate and multivariate Cox analyses, and were successfully validated in The Netherlands Cancer Institute data set. Furthermore, E2F1 expression levels correlated well with the 70-gene signature, demonstrating the ability to select a common subset of patients with good prognosis. Breast cancer patient outcome was comparably predictable by E2F1 levels, the 70-gene signature, the intrinsic subtype gene classification, the wound response signature and the recurrence score. CONCLUSION: Assessment of E2F1 at the mRNA level in primary breast cancer is a strong determinant of breast cancer patient outcome. E2F1 expression identified patients at low risk of metastasis irrespective of estrogen receptor and ERBB2 status, and demonstrated prognostic performance similar to that of different gene expression-based predictors.