28 results for Subsequent Risk
Abstract:
INTRODUCTION: For decades, clinicians dealing with immunocompromised and critically ill patients have perceived a link between Candida colonization and subsequent infection. However, the pathophysiological progression from colonization to infection was clearly established only through the formal description of the colonization index (CI) in critically ill patients. Unfortunately, the literature reflects intense confusion about the pathophysiology of invasive candidiasis and specific associated risk factors. METHODS: We review the contribution of the CI in the field of Candida infection and its development in the 20 years following its original description in 1994. The development of the CI enabled an improved understanding of the pathogenesis of invasive candidiasis and the use of targeted empirical antifungal therapy in subgroups of patients at increased risk for infection. RESULTS: The recognition of specific characteristics among underlying conditions, such as neutropenia, solid organ transplantation, and surgical and nonsurgical critical illness, has enabled the description of distinct epidemiological patterns in the development of invasive candidiasis. CONCLUSIONS: Despite its limited bedside practicality and before confirmation of potentially more accurate predictors, such as specific biomarkers, the CI remains an important way to characterize the dynamics of colonization, which increases early in patients who develop invasive candidiasis.
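The abstract refers to the colonization index without giving its definition. As commonly described in the critical-care literature after the original 1994 paper, the CI is the ratio of distinct non-blood body sites growing Candida to the number of sites cultured, with values of 0.5 or more usually taken to mark increased risk. A minimal sketch of that calculation (the threshold and the site handling are assumptions, not details stated in this abstract):

```python
def colonization_index(colonized_sites: int, cultured_sites: int) -> float:
    """Ratio of distinct non-blood sites growing Candida to sites cultured."""
    if cultured_sites <= 0:
        raise ValueError("at least one site must be cultured")
    return colonized_sites / cultured_sites

# Example: 3 of 5 screened sites positive gives CI = 0.6, above the usual 0.5 cut-off
ci = colonization_index(3, 5)
print(f"CI = {ci:.2f} ->", "increased risk" if ci >= 0.5 else "lower risk")
```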
Abstract:
The occurrence of cardiovascular diseases (CVD) and related risk factors was evaluated in Seychelles, a middle-income country, as accumulating evidence supports increasing rates of CVD in developing countries. CVD mortality was obtained from vital statistics for two periods, 1984-5 and 1991-3. CVD morbidity was estimated by retrospective review of discharge diagnoses for all admissions to medical wards in 1990-1992. Levels of CVD risk factors in the population were assessed in 1989 through a population-based survey. In 1991-93, standardized mortality rates in males and females were, respectively, 80.9 and 38.8 for cerebrovascular disease and 92.9 and 47.0 for ischemic heart disease. CVD accounted for 25.2% of all admissions to medical wards. Among the general population aged 35-64, 30% had high blood pressure, 52% of males smoked, and 28% of females were obese. These findings substantiate the ongoing health transition to CVD in Seychelles. More generally, epidemiologic data on CVD mortality, morbidity, and related risk factors, as well as similar indicators for other chronic diseases, should appear more consistently in national and international reports on human development, to help emphasize to health policy makers the current transition to chronic diseases in developing countries and the consequent need for appropriate control and prevention programs.
Abstract:
AIMS: To investigate the relationships between gestational diabetes mellitus (GDM) and the metabolic syndrome (MS), as insulin resistance has been suggested to be the hallmark of both conditions, and to analyse post-partum screening in order to identify risk factors for the subsequent development of type 2 diabetes mellitus (DM). METHODS: A retrospective analysis of all singleton pregnancies diagnosed with GDM at the Lausanne University Hospital over 3 consecutive years. Pre-pregnancy obesity, hypertension and dyslipidaemia were recorded as constituents of the MS. RESULTS: Among 5788 deliveries, 159 women (2.7%) with GDM were identified. Constituents of the MS were present before the GDM pregnancy in 26% (n = 37/144): 84% (n = 31/37) were obese, 38% (n = 14/37) had hypertension and 22% (n = 8/37) had dyslipidaemia. Gestational hypertension was associated with obesity (OR = 3.2, P = 0.02) and dyslipidaemia (OR = 5.4, P = 0.002). Seventy-four women (47%) returned for the post-partum oral glucose tolerance test (OGTT), which was abnormal in 20 women (27%): 11% (n = 8) had type 2 diabetes and 16% (n = 12) had impaired glucose tolerance. Independent predictors of abnormal glucose tolerance in the post-partum period were having > 2 abnormal values on the diagnostic OGTT during pregnancy and presenting with MS constituents (OR = 5.2, CI 1.8-23.2 and OR = 5.3, CI 1.3-22.2). CONCLUSIONS: In one-fourth of GDM pregnancies, metabolic abnormalities precede the appearance of glucose intolerance. These women are at high risk of developing the MS and type 2 diabetes in later years. Where GDM screening is not universal, practitioners should be aware of these metabolic risks in every pregnant woman presenting with obesity, hypertension or dyslipidaemia, in order to achieve better diagnosis and, especially, better post-partum follow-up and treatment.
Abstract:
Data collected by the Cancer Registry of the Canton of Vaud, Switzerland, were used to estimate the risk of suicide for patients diagnosed with cancer. Among 24,166 cases of invasive neoplasms other than nonmelanomatous skin cancer reported between 1976 and 1987 and followed through integrated active follow-up to the end of 1987, for a total of 57,164 person-years at risk, there were 55 registered suicides vs. 21.3 expected (standardized mortality ratio, SMR = 2.6; 95% confidence interval, CI = 2.0-3.4). The ratio was slightly, but not significantly, higher for males (SMR = 2.8) than for females (SMR = 2.2) and comparable across subsequent age groups. The risk of suicide was high during the 1st year after notification (SMR = 3.9) and decreased to 2.2 between 1 and 5 years and to 1.5 beyond 5 years. This study suggests that the risk of suicide after a diagnosis of cancer may be greater than previously estimated from cancer registry data in Finland, Sweden, and Connecticut (USA), at least in this Central European population with high overall suicide rates.
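The SMR quoted above is simply observed over expected deaths; the abstract does not say how its confidence interval was obtained, but an exact Poisson interval reproduces the reported 2.0-3.4 closely. A minimal sketch (the interval method is an assumption, not taken from the paper):

```python
from scipy.stats import chi2

def smr_with_ci(observed: int, expected: float, alpha: float = 0.05):
    """Standardized mortality ratio with an exact Poisson confidence interval."""
    smr = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lower, upper

# Figures quoted in the abstract: 55 observed suicides vs 21.3 expected
print(smr_with_ci(55, 21.3))  # roughly (2.6, 1.9, 3.4)
```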
Abstract:
BACKGROUND: Cytomegalovirus (CMV) disease remains an important problem in solid-organ transplant recipients, with the greatest risk among donor CMV-seropositive, recipient-seronegative (D(+)/R(-)) patients. CMV-specific cell-mediated immunity may be able to predict which patients will develop CMV disease. METHODS: We prospectively included D(+)/R(-) patients who received antiviral prophylaxis. We used the Quantiferon-CMV assay to measure interferon-γ levels following in vitro stimulation with CMV antigens. The test was performed at the end of prophylaxis and 1 and 2 months later. The primary outcome was the incidence of CMV disease at 12 months after transplant. We calculated positive and negative predictive values of the assay for protection from CMV disease. RESULTS: Overall, 28 of 127 (22%) patients developed CMV disease. Of 124 evaluable patients, 31 (25%) had a positive result, 81 (65.3%) had a negative result, and 12 (9.7%) had an indeterminate result (negative for both mitogen and CMV antigen) with the Quantiferon-CMV assay. At 12 months, patients with a positive result had a lower subsequent incidence of CMV disease than patients with a negative or an indeterminate result (6.4% vs 22.2% vs 58.3%, respectively; P < .001). Positive and negative predictive values of the assay for protection from CMV disease were 0.90 (95% confidence interval [CI], .74-.98) and 0.27 (95% CI, .18-.37), respectively. CONCLUSIONS: This assay may be useful to predict whether patients are at low, intermediate, or high risk for the development of subsequent CMV disease after prophylaxis. CLINICAL TRIALS REGISTRATION: NCT00817908.
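Because the predictive values here are read in the direction of protection, the reported 0.90 and 0.27 correspond to the share of assay-positive patients who stay free of CMV disease and the share of assay-negative patients who nevertheless develop it. A minimal sketch of that arithmetic with illustrative counts (hypothetical, chosen only to show the calculation, not the study's patient-level data):

```python
def predictive_values_for_protection(pos_protected: int, pos_total: int,
                                     neg_diseased: int, neg_total: int):
    """PPV: fraction of assay-positive patients remaining free of CMV disease.
    NPV: fraction of assay-negative patients who go on to develop CMV disease."""
    return pos_protected / pos_total, neg_diseased / neg_total

# Hypothetical counts for illustration only
ppv, npv = predictive_values_for_protection(pos_protected=28, pos_total=31,
                                             neg_diseased=22, neg_total=81)
print(f"PPV for protection = {ppv:.2f}, NPV = {npv:.2f}")
```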
Abstract:
Bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA) is used to diagnose osteoporosis and assess fracture risk. However, DXA cannot evaluate trabecular microarchitecture. This study used a novel software program (TBS iNsight; Med-Imaps, Geneva, Switzerland) to estimate bone texture (trabecular bone score [TBS]) from standard spine DXA images. We hypothesized that TBS assessment would differentiate women with low-trauma fracture from those without. In this study, TBS was performed blinded to fracture status on existing research DXA lumbar spine (LS) images from 429 women. Mean participant age was 71.3 yr, and 158 had prior fractures. The correlation between LS BMD and TBS was low (r = 0.28), suggesting that these parameters reflect different bone properties. Age- and body mass index-adjusted odds ratios (ORs) ranged from 1.36 to 1.63 for LS or hip BMD in discriminating women with low-trauma nonvertebral and vertebral fractures. TBS demonstrated ORs from 2.46 to 2.49 for these respective fractures; these remained significant after adjustment for the lowest BMD T-score (OR = 2.38 and 2.44). Seventy-three percent of all fractures occurred in women without osteoporosis (BMD T-score > -2.5); 72% of these women had a TBS score below the median, thereby appropriately classifying them as being at increased risk. In conclusion, TBS assessment enhances DXA by evaluating the trabecular pattern and identifying individuals with vertebral or low-trauma fracture. TBS identified 66-70% of women with fracture who were not classified as having osteoporosis by BMD alone.
Abstract:
Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can nevertheless be evaluated in daily practice with the trabecular bone score (TBS), which is very simple to obtain by reanalyzing a lumbar DXA scan. TBS has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF and the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the COLAUS cohort, which started in Lausanne in 2003 with the main goal of obtaining information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported here. We included 631 women: mean age 67.4 ± 6.7 years, BMI 26.1 ± 4.6, mean lumbar spine BMD 0.943 ± 0.168 (T-score −1.4 SD), and TBS 1.271 ± 0.103. As expected, the correlation between BMD and site-matched TBS is low (r² = 0.16). The prevalence of VFx grade 2/3, major OP Fx and all OP Fx is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted ORs (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1), and 1.3 (1.1-1.6) for BMD for the different categories of fractures, and 2.0 (1.4-3.0), 1.9 (1.4-2.5), and 1.4 (1.1-1.7) for TBS, respectively. Taken separately, a BMD < −2.5 SD or a TBS < 1.200 identifies only 32 to 37% of women with an OP Fx; combining the two criteria (BMD < −2.5 SD or TBS < 1.200) identifies 54 to 60% of them. As in previously published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, adding TBS to BMD significantly increases the identification of women with prevalent OP Fx who would have been misclassified by BMD alone. For the first time we are able to obtain complementary information about fracture (VFA), density (BMD), and micro- and macro-architecture (TBS and HSA) from a single simple, low-radiation and inexpensive device: DXA. Such complementary information is very useful for the patient in daily practice and, moreover, will likely have an impact on cost-effectiveness analyses.
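The gain reported above comes from flagging a woman when either threshold is crossed. A minimal sketch of that combined rule (thresholds taken from the abstract; the function name and data layout are illustrative):

```python
def high_fracture_risk(bmd_t_score: float, tbs: float,
                       bmd_cutoff: float = -2.5, tbs_cutoff: float = 1.200) -> bool:
    """Flag a woman as high risk if either the BMD T-score or the TBS
    falls below its threshold (the combined criterion described above)."""
    return bmd_t_score < bmd_cutoff or tbs < tbs_cutoff

# Example: osteopenic BMD (T-score -1.8) but degraded micro-architecture (TBS 1.15)
print(high_fracture_risk(-1.8, 1.15))  # True: caught by TBS despite a non-osteoporotic BMD
```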
Treatment intensification and risk factor control: toward more clinically relevant quality measures.
Abstract:
BACKGROUND: Intensification of pharmacotherapy in persons with poorly controlled chronic conditions has been proposed as a clinically meaningful process measure of quality. OBJECTIVE: To validate measures of treatment intensification by evaluating their associations with subsequent control of hypertension, hyperlipidemia, and diabetes mellitus across 35 medical facility populations in Kaiser Permanente, Northern California. DESIGN: Hierarchical analyses of associations of improvements in facility-level treatment intensification rates from 2001 to 2003 with patient-level risk factor levels at the end of 2003. PATIENTS: Members (515,072 and 626,130; age >20 years) with hypertension, hyperlipidemia, and/or diabetes mellitus in 2001 and 2003, respectively. MEASUREMENTS: Treatment intensification for each risk factor was defined as an increase in the number of drug classes prescribed, an increase in dosage of at least 1 drug, or a switch to a drug from another class within 3 months of observed poor risk factor control. RESULTS: Facility-level improvements in treatment intensification rates between 2001 and 2003 were strongly associated with a greater likelihood of being in control at the end of 2003 (P ≤ 0.05 for each risk factor) after adjustment for patient- and facility-level covariates. Compared with facility rankings based solely on control, adding the percentage of poorly controlled patients who received treatment intensification changed the 2003 rankings substantially: 14%, 51%, and 29% of the facilities changed ranks by 5 or more positions for hypertension, hyperlipidemia, and diabetes, respectively. CONCLUSIONS: Treatment intensification is tightly linked to improved control. It therefore deserves consideration as a process measure for motivating quality improvement and possibly for measuring clinical performance.
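The MEASUREMENTS definition above lends itself to a simple operational check against two medication snapshots. A minimal sketch of that logic (the data model and field names are hypothetical; the study's actual pharmacy-data processing is not described in the abstract):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prescription:
    drug: str
    drug_class: str
    dose: float  # hypothetical unit, e.g. mg/day

def intensified(before: list[Prescription], after: list[Prescription]) -> bool:
    """True if the regimen gained a drug class, increased the dose of a
    continued drug, or switched to a drug from another class."""
    classes_before = {p.drug_class for p in before}
    classes_after = {p.drug_class for p in after}
    if len(classes_after) > len(classes_before):           # more drug classes
        return True
    doses_before = {p.drug: p.dose for p in before}
    if any(p.drug in doses_before and p.dose > doses_before[p.drug] for p in after):
        return True                                         # dose increase
    return bool(classes_after - classes_before)             # switch to another class

# Example: the dose of the same antihypertensive is raised -> intensification
before = [Prescription("lisinopril", "ACE inhibitor", 10)]
after = [Prescription("lisinopril", "ACE inhibitor", 20)]
print(intensified(before, after))  # True
```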
Abstract:
IMPORTANCE: Associations between subclinical thyroid dysfunction and fractures are unclear and clinical trials are lacking. OBJECTIVE: To assess the association of subclinical thyroid dysfunction with hip, nonspine, spine, or any fractures. DATA SOURCES AND STUDY SELECTION: The databases of MEDLINE and EMBASE (inception to March 26, 2015) were searched without language restrictions for prospective cohort studies with thyroid function data and subsequent fractures. DATA EXTRACTION: Individual participant data were obtained from 13 prospective cohorts in the United States, Europe, Australia, and Japan. Levels of thyroid function were defined as euthyroidism (thyroid-stimulating hormone [TSH], 0.45-4.49 mIU/L), subclinical hyperthyroidism (TSH <0.45 mIU/L), and subclinical hypothyroidism (TSH ≥4.50-19.99 mIU/L) with normal thyroxine concentrations. MAIN OUTCOME AND MEASURES: The primary outcome was hip fracture. Any fractures, nonspine fractures, and clinical spine fractures were secondary outcomes. RESULTS: Among 70,298 participants, 4092 (5.8%) had subclinical hypothyroidism and 2219 (3.2%) had subclinical hyperthyroidism. During 762,401 person-years of follow-up, hip fracture occurred in 2975 participants (4.6%; 12 studies), any fracture in 2528 participants (9.0%; 8 studies), nonspine fracture in 2018 participants (8.4%; 8 studies), and spine fracture in 296 participants (1.3%; 6 studies). In age- and sex-adjusted analyses, the hazard ratio (HR) for subclinical hyperthyroidism vs euthyroidism was 1.36 for hip fracture (95% CI, 1.13-1.64; 146 events in 2082 participants vs 2534 in 56,471); for any fracture, HR was 1.28 (95% CI, 1.06-1.53; 121 events in 888 participants vs 2203 in 25,901); for nonspine fracture, HR was 1.16 (95% CI, 0.95-1.41; 107 events in 946 participants vs 1745 in 21,722); and for spine fracture, HR was 1.51 (95% CI, 0.93-2.45; 17 events in 732 participants vs 255 in 20,328). Lower TSH was associated with higher fracture rates: for TSH of less than 0.10 mIU/L, HR was 1.61 for hip fracture (95% CI, 1.21-2.15; 47 events in 510 participants); for any fracture, HR was 1.98 (95% CI, 1.41-2.78; 44 events in 212 participants); for nonspine fracture, HR was 1.61 (95% CI, 0.96-2.71; 32 events in 185 participants); and for spine fracture, HR was 3.57 (95% CI, 1.88-6.78; 8 events in 162 participants). Risks were similar after adjustment for other fracture risk factors. Endogenous subclinical hyperthyroidism (excluding thyroid medication users) was associated with HRs of 1.52 (95% CI, 1.19-1.93) for hip fracture, 1.42 (95% CI, 1.16-1.74) for any fracture, and 1.74 (95% CI, 1.01-2.99) for spine fracture. No association was found between subclinical hypothyroidism and fracture risk. CONCLUSIONS AND RELEVANCE: Subclinical hyperthyroidism was associated with an increased risk of hip and other fractures, particularly among those with TSH levels of less than 0.10 mIU/L and those with endogenous subclinical hyperthyroidism. Further study is needed to determine whether treating subclinical hyperthyroidism can prevent fractures.
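The exposure categories in this meta-analysis are defined purely by TSH cut-offs in the presence of normal thyroxine, so they can be expressed as a small classification rule. A minimal sketch using the thresholds quoted above (the handling of out-of-range values is an assumption):

```python
def thyroid_category(tsh_miu_per_l: float, thyroxine_normal: bool) -> str:
    """Classify thyroid function from TSH (mIU/L) using the study's cut-offs."""
    if not thyroxine_normal:
        return "overt dysfunction (outside these categories)"
    if tsh_miu_per_l < 0.45:
        return "subclinical hyperthyroidism"
    if tsh_miu_per_l <= 4.49:
        return "euthyroidism"
    if tsh_miu_per_l <= 19.99:
        return "subclinical hypothyroidism"
    return "TSH above study range"

print(thyroid_category(0.05, True))  # subclinical hyperthyroidism (TSH < 0.10 subgroup)
print(thyroid_category(2.1, True))   # euthyroidism
```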
Abstract:
BACKGROUND: Treatment of septic hand tenosynovitis is complex, and often requires multiple débridements and prolonged antibiotic therapy. The authors undertook this study to identify factors that might be associated with the need for subsequent débridement (after the initial one) because of persistence or secondary worsening of infection. METHODS: In this retrospective single-center study, the authors included all adult patients who presented to their emergency department from 2007 to 2010 with septic tenosynovitis of the hand. RESULTS: The authors identified 126 adult patients (55 men; median age, 45 years), nine of whom were immunosuppressed. All had community-acquired infection; 34 (27 percent) had a subcutaneous abscess and eight (6 percent) were febrile. All underwent at least one surgical débridement and had concomitant antibiotic therapy (median, 15 days; range, 7 to 82 days). At least one additional surgical intervention was required in 18 cases (median, 1.13 interventions; range, one to five interventions). All but four episodes (97 percent) were cured of infection on the first attempt after a median follow-up of 27 months. By multivariate analysis, only two factors were significantly associated with the outcome "subsequent surgical débridement": abscess (OR, 4.6; 95 percent CI, 1.5 to 14.0) and longer duration of antibiotic therapy (OR, 1.2; 95 percent CI, 1.1 to 1.2). CONCLUSION: In septic tenosynovitis of the hand, the only presenting factor that was statistically predictive of an increased risk of needing a second débridement was the presence of a subcutaneous abscess. CLINICAL QUESTION/LEVEL OF EVIDENCE: Risk, III.
Abstract:
BACKGROUND: Temporary increases in plasma HIV RNA ('blips') are common in HIV patients on combination antiretroviral therapy (cART). Blips above 500 copies/mL have been associated with subsequent viral rebound. It is not clear whether this relationship still holds when measurements are made using newer, more sensitive assays. METHODS: We selected antiretroviral-naive patients who then recorded one or more episodes of viral suppression on cART with HIV RNA measurements made using more sensitive assays (lower limit of detection below 50 copies/mL). We estimated the association in these episodes between blip magnitude and the time to viral rebound. RESULTS: Four thousand ninety-four patients recorded a first episode of viral suppression on cART using more sensitive assays; 1672 patients recorded at least one subsequent suppression episode. Most suppression episodes (87 %) were recorded with TaqMan version 1 or 2 assays. Of the 2035 blips recorded, 84 %, 12 % and 4 % were of low (50-199 copies/mL), medium (200-499 copies/mL) and high (500-999 copies/mL) magnitude, respectively. The risk of viral rebound increased with blip magnitude, with hazard ratios of 1.20 (95 % CI 0.89-1.61), 1.42 (95 % CI 0.96-2.19) and 1.93 (95 % CI 1.24-3.01) for low-, medium- and high-magnitude blips respectively, corresponding to a hazard ratio of 1.09 (95 % CI 1.03 to 1.15) per additional 100 copies/mL of HIV RNA. CONCLUSIONS: With the more sensitive assays now commonly used for monitoring patients, blips above 200 copies/mL are increasingly likely to lead to viral rebound and should prompt a discussion about adherence.
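The per-100-copies hazard ratio reported above implies a multiplicative dose-response if blip magnitude entered the Cox model as a linear term, which is a common modelling choice; the abstract does not confirm this, so the sketch below is an assumed interpretation rather than the authors' method:

```python
HR_PER_100_COPIES = 1.09  # hazard ratio per additional 100 copies/mL (from the abstract)

def relative_hazard(blip_copies_per_ml: float, reference_copies_per_ml: float = 50.0) -> float:
    """Hazard of viral rebound relative to a blip at the reference magnitude,
    assuming the per-100-copies effect acts multiplicatively."""
    return HR_PER_100_COPIES ** ((blip_copies_per_ml - reference_copies_per_ml) / 100.0)

for blip in (150, 400, 900):  # roughly low-, medium- and high-magnitude blips
    print(blip, round(relative_hazard(blip), 2))
```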
Abstract:
BACKGROUND: The diagnosis of pulmonary embolism (PE) in the emergency department (ED) is crucial. As emergency physicians fear missing this potentially life-threatening condition, PE tends to be over-investigated, exposing patients to unnecessary risks and uncertain benefit in terms of outcome. The Pulmonary Embolism Rule-out Criteria (PERC) is an eight-item block of clinical criteria that can identify patients who can safely be discharged from the ED without further investigation for PE. The endorsement of this rule could markedly reduce the number of irradiative imaging studies, ED length of stay, and the rate of adverse events resulting from both diagnostic and therapeutic interventions. Several retrospective and prospective studies have shown the safety and benefits of the PERC rule for PE diagnosis in low-risk patients, but the validity of this rule is still controversial. We hypothesize that in European patients with a low gestalt clinical probability who are PERC-negative, PE can be safely ruled out and the patient discharged without further testing. METHODS/DESIGN: This is a controlled, cluster-randomized trial in 15 centers in France. Each center will be randomized for the sequence of intervention periods: a 6-month intervention period (PERC-based strategy) followed by a 6-month control period (usual care), or in reverse order, with 2 months of "wash-out" between the 2 periods. Adult patients presenting to the ED with a suspicion of PE and a low pretest probability estimated by clinical gestalt will be eligible. The primary outcome is the failure rate of the diagnostic strategy, defined as the proportion of patients in whom PE was initially ruled out who are diagnosed with a venous thromboembolic event during 3-month follow-up. DISCUSSION: The PERC rule has the potential to decrease the number of irradiative imaging studies in the ED, and is reported to be safe. However, no randomized study has ever validated the safety of PERC. Furthermore, some studies have challenged the safety of a PERC-based strategy to rule out PE, especially in Europe, where the prevalence of PE diagnosed in the ED is high. The PROPER study should provide high-quality evidence to settle this issue. If it confirms the safety of the PERC rule, physicians will be able to reduce the number of investigations, associated adverse events, costs, and ED length of stay for patients with a low clinical probability of PE. TRIAL REGISTRATION: NCT02375919.
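The eight PERC items are not enumerated in the abstract; as usually published, they are age under 50 years, pulse under 100/min, room-air oxygen saturation of at least 95%, and no hemoptysis, estrogen use, prior DVT/PE, unilateral leg swelling, or recent surgery/trauma requiring hospitalization. A minimal sketch of the rule under that assumption (a patient is PERC-negative only when every item is satisfied):

```python
from dataclasses import dataclass

@dataclass
class PercInputs:
    age_years: int
    heart_rate_bpm: int
    sao2_room_air_pct: float
    hemoptysis: bool
    estrogen_use: bool
    prior_dvt_or_pe: bool
    unilateral_leg_swelling: bool
    recent_surgery_or_trauma: bool  # requiring hospitalization in the previous 4 weeks

def perc_negative(p: PercInputs) -> bool:
    """True when all eight items are met; with a low gestalt probability,
    PE would then be ruled out without further testing."""
    return (p.age_years < 50
            and p.heart_rate_bpm < 100
            and p.sao2_room_air_pct >= 95
            and not p.hemoptysis
            and not p.estrogen_use
            and not p.prior_dvt_or_pe
            and not p.unilateral_leg_swelling
            and not p.recent_surgery_or_trauma)

# Example: a 34-year-old with normal vitals and no risk items is PERC-negative
print(perc_negative(PercInputs(34, 82, 98.0, False, False, False, False, False)))
```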
Abstract:
Alcohol misuse is the leading cause of cirrhosis and the second most common indication for liver transplantation in the Western world. We performed a genome-wide association study for alcohol-related cirrhosis in individuals of European descent (712 cases and 1,426 controls) with subsequent validation in two independent European cohorts (1,148 cases and 922 controls). We identified variants in the MBOAT7 (P = 1.03 × 10⁻⁹) and TM6SF2 (P = 7.89 × 10⁻¹⁰) genes as new risk loci and confirmed rs738409 in PNPLA3 as an important risk locus for alcohol-related cirrhosis (P = 1.54 × 10⁻⁴⁸) at a genome-wide level of significance. These three loci have a role in lipid processing, suggesting that lipid turnover is important in the pathogenesis of alcohol-related cirrhosis.