139 results for observational study
Abstract:
PURPOSE: Venlafaxine has shown benefit in the treatment of depression and pain. Worldwide, data on the outcomes of patients with chronic pain and depressive symptoms treated with venlafaxine in the primary care setting are largely lacking. This observational study aimed to elucidate the efficacy of venlafaxine and its prescription by Swiss primary care physicians and psychiatrists in patients with chronic pain and depressive symptomatology. SUBJECTS AND METHODS: We studied 505 patients with depressive symptoms suffering from chronic pain in a prospective, naturalistic, Swiss community-based observational trial of venlafaxine in primary care. These patients were treated with venlafaxine by 122 physicians, namely psychiatrists, general practitioners, and internists. RESULTS: On average, patients were treated with 143±75 mg (range 0-450 mg) venlafaxine daily over a follow-up of three months. Venlafaxine proved to be beneficial in the treatment of both depressive symptoms and chronic pain. DISCUSSION: Although side effects were absent in most patients, physicians may frequently have missed a satisfactory response of depression by underdosing venlafaxine. Our results reflect the complexity of treating chronic pain in patients with depressive symptoms in primary care. CONCLUSION: Further randomized dose-finding studies are needed to learn more about the appropriate dosage for treating depression and comorbid pain with venlafaxine.
Abstract:
Background This study is part of a nationwide evaluation of complementary medicine in Switzerland (Programme Evaluation of Complementary Medicine, PEK) and was funded by the Swiss Federal Office of Public Health. The main objective was to investigate patient satisfaction and perception of side effects in homeopathy compared with conventional care in a primary care setting. Methods We examined data from two cross-sectional studies conducted in 2002–2003. The first was a physician questionnaire assessing structural characteristics of practices. The second was conducted on four given days over a 12-month period in 2002/2003, using a physician and a patient questionnaire at consultation and a patient questionnaire mailed to the patient one month later (including the Europep questionnaire). The participating physicians were all trained and licensed in conventional medicine. An additional qualification was required for medical doctors providing homeopathy (membership in the Swiss association of homeopathic physicians, SVHA). Results A total of 6778 adult patients received the questionnaire and 3126 responded (46.1%). Statistically significant differences were found with respect to health status (higher percentage of chronic and severe conditions in the homeopathic group), perception of side effects (higher percentage of reported side effects in the conventional group) and patient satisfaction (higher percentage of satisfied patients in the homeopathic group). Conclusion Overall patient satisfaction was significantly higher in homeopathic than in conventional care. Homeopathic treatments were perceived as a low-risk therapy with two to three times fewer side effects than conventional care.
Abstract:
INTRODUCTION: Sedative and analgesic drugs are frequently used in critically ill patients. Their overuse may prolong mechanical ventilation and length of stay in the intensive care unit. Guidelines recommend use of sedation protocols that include sedation scores and trials of sedation cessation to minimize drug use. We evaluated processed electroencephalography (response and state entropy and bispectral index) as an adjunct to monitoring effects of commonly used sedative and analgesic drugs and intratracheal suctioning. METHODS: Electrodes for monitoring bispectral index and entropy were placed on the foreheads of 44 critically ill patients requiring mechanical ventilation and who previously had no brain dysfunction. Sedation was targeted individually using the Ramsay Sedation Scale, recorded every 2 hours or more frequently. Use of and indications for sedative and analgesic drugs and intratracheal suctioning were recorded manually and using a camera. At the end of the study, processed electroencephalographic and haemodynamic variables collected before and after each drug application and tracheal suctioning were analyzed. The Ramsay score was used for comparison with processed electroencephalography when assessed within 15 minutes of an intervention. RESULTS: The indications for boluses of sedative drugs exhibited statistically significant, albeit clinically irrelevant, differences in terms of their association with processed electroencephalographic parameters. Electroencephalographic variables decreased significantly after a bolus, but no specific pattern in electroencephalographic variables before drug administration was identified. The same was true for opiate administration. At both 30 minutes and 2 minutes before intratracheal suctioning, there was no difference in electroencephalographic or clinical signs between patients who had and had not received drugs 10 minutes before suctioning. Among patients who received drugs, electroencephalographic parameters returned to baseline more rapidly. In those cases in which the Ramsay score was assessed before the event, processed electroencephalography exhibited high variation. CONCLUSIONS: Unpleasant or painful stimuli and sedative and analgesic drugs are associated with significant changes in processed electroencephalographic parameters. However, clinical indications for drug administration were not reflected by these electroencephalographic parameters, and barely by the sedation level before drug administration or tracheal suctioning. This precludes incorporation of entropy and bispectral index as target variables in sedation and analgesia protocols for critically ill patients.
Abstract:
BACKGROUND: This study is part of a cross-sectional evaluation of complementary medicine providers in primary care in Switzerland. It compares patient satisfaction with anthroposophic medicine (AM) and conventional medicine (CON). METHODS: We collected baseline data on structural characteristics of the physicians and their practices and on the health status and demographics of the patients. Four weeks later, patients assessed their satisfaction with the treatment received (five items, four-point rating scale) and evaluated the practice care (validated 23-item Europep questionnaire, five-point rating scale). 1946 adult patients of 71 CON and 32 AM primary care physicians participated. RESULTS: 1. Baseline characteristics: AM patients were more likely female (75.6% vs. 59.0%, p < 0.001) and had higher education (38.6% vs. 24.7%, p < 0.001). They suffered more often from chronic illnesses (52.8% vs. 46.2%, p = 0.015) and cancer (7.4% vs. 1.1%). AM consultations lasted on average 23.3 minutes (CON: 16.8 minutes, p < 0.001). 2. Satisfaction: More AM patients expressed general treatment satisfaction (56.1% vs. 43.4%, p < 0.001) and saw their expectations completely fulfilled at follow-up (38.7% vs. 32.6%, p < 0.001). AM patients reported significantly fewer adverse side effects (9.3% vs. 15.4%, p = 0.003) and more other positive effects from treatment (31.7% vs. 17.1%, p < 0.001). Europep: AM patients appreciated that their physicians listened to them (80.0% vs. 67.1%, p < 0.001), spent more time (76.5% vs. 61.7%, p < 0.001), had more interest in their personal situation (74.6% vs. 60.3%, p < 0.001), involved them more in decisions about their medical care (67.8% vs. 58.4%, p = 0.022), and made it easy to tell the physician about their problems (71.6% vs. 62.9%, p = 0.023). AM patients gave significantly better ratings for information and support (in 3 of 4 items p ≤ 0.044) and for thoroughness (70.4% vs. 56.5%, p < 0.001). CONCLUSION: AM patients were significantly more satisfied and rated their physicians as valuable partners in their treatment. This suggests that, subject to certain limitations, AM therapy may be beneficial in primary care. To confirm this, more detailed qualitative studies would be necessary.
Abstract:
To study the time course of demineralization and fracture incidence after spinal cord injury (SCI), 100 paraplegic men with complete motor loss were investigated in a cross-sectional study 3 months to 30 years after their traumatic SCI. Fracture history was assessed and verified using patients' files and X-rays. BMD of the lumbar spine (LS), femoral neck (FN), distal forearm (ultradistal part = UDR, 1/3 distal part = 1/3R), distal tibial diaphysis (TDIA), and distal tibial epiphysis (TEPI) was measured using DXA. Stiffness of the calcaneus (QUI.CALC), speed of sound of the tibia (SOS.TIB), and amplitude-dependent SOS across the proximal phalanges (adSOS.PHAL) were measured using quantitative ultrasound (QUS). Z-scores of BMD and QUS were plotted against time-since-injury and compared among four groups of paraplegics stratified according to time-since-injury (<1 year, stratum I; 1-9 years, stratum II; 10-19 years, stratum III; 20-29 years, stratum IV). Biochemical markers of bone turnover (deoxypyridinoline/creatinine (D-pyr/Cr), osteocalcin, alkaline phosphatase) and the main parameters of calcium phosphate metabolism were measured. Fifteen of 98 paraplegics had sustained a total of 39 fragility fractures over 1,010 patient-years of observation. All recorded fractures were fractures of the lower limbs, and the mean time to first fracture was 8.9 ± 1.4 years. Fracture incidence increased with time after SCI, from 1% in the first 12 months to 4.6%/year in paraplegics injured for more than 20 years (p < .01). The overall fracture incidence was 2.2%/year. Compared with nonfractured paraplegics, those with a fracture history had been injured for a longer time (p < .01). Furthermore, they had lower Z-scores at FN, TEPI, and TDIA (p < .01 to < .0001), the largest difference being observed at TDIA. At the lower limbs, BMD decreased with time at all sites (r = .49 to .78, all p < .0001). At FN and TEPI, bone loss followed a log curve which leveled off between 1 and 3 years after injury. In contrast, Z-scores of TDIA continuously decreased even beyond 10 years after injury. LS BMD Z-score increased with time since SCI (p < .05). Similarly to DXA, QUS allowed differentiation of early and rapid trabecular bone loss (QUI.CALC) from slow and continuous cortical bone loss (SOS.TIB). Biochemical markers reflected a disproportion between highly elevated bone resorption and almost normal bone formation early after injury. Turnover declined following a log curve with time after SCI; however, D-pyr/Cr remained elevated in 30% of paraplegics injured more than 10 years earlier. In paraplegic men, early (trabecular) and persistent (cortical) bone loss occurs at the lower limbs and leads to an increasing fracture incidence with time after SCI.
Abstract:
BACKGROUND Partner notification is essential to the comprehensive case management of sexually transmitted infections. Systematic reviews and mathematical modelling can be used to synthesise information about the effects of new interventions to enhance the outcomes of partner notification. OBJECTIVE To study the effectiveness and cost-effectiveness of traditional and new partner notification technologies for curable sexually transmitted infections (STIs). DESIGN Secondary data analysis of clinical audit data; systematic reviews of randomised controlled trials (MEDLINE, EMBASE and Cochrane Central Register of Controlled Trials) published from 1 January 1966 to 31 August 2012 and of studies of health-related quality of life (HRQL) [MEDLINE, EMBASE, ISI Web of Knowledge, NHS Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA)] published from 1 January 1980 to 31 December 2011; static models of clinical effectiveness and cost-effectiveness; and dynamic modelling studies to improve parameter estimation and examine effectiveness. SETTING General population and genitourinary medicine clinic attenders. PARTICIPANTS Heterosexual women and men. INTERVENTIONS Traditional partner notification by patient or provider referral, and new partner notification by expedited partner therapy (EPT) or its UK equivalent, accelerated partner therapy (APT). MAIN OUTCOME MEASURES Population prevalence; index case reinfection; and partners treated per index case. RESULTS Expedited partner therapy reduced reinfection in index cases with curable STIs more than simple patient referral [risk ratio (RR) 0.71; 95% confidence interval (CI) 0.56 to 0.89]. There are no randomised trials of APT. The median number of partners treated for chlamydia per index case in UK clinics was 0.60. The number of partners needed to treat to interrupt transmission of chlamydia was lower for casual than for regular partners. In dynamic model simulations, >10% of partners are chlamydia positive with look-back periods of up to 18 months. In the presence of a chlamydia screening programme that reduces population prevalence, treatment of current partners achieves most of the additional reduction in prevalence attributable to partner notification. Dynamic model simulations show that cotesting and treatment for chlamydia and gonorrhoea reduce the prevalence of both STIs. APT has a limited additional effect on prevalence but reduces the rate of index case reinfection. Published quality-adjusted life-year (QALY) weights were of insufficient quality to be used in a cost-effectiveness study of partner notification in this project. Using an intermediate outcome of cost per infection diagnosed, doubling the efficacy of partner notification from 0.4 to 0.8 partners treated per index case was more cost-effective than increasing chlamydia screening coverage. CONCLUSIONS There is evidence to support the improved clinical effectiveness of EPT in reducing index case reinfection. In a general heterosexual population, partner notification identifies new infected cases but its impact on chlamydia prevalence is limited. Partner notification for casual partners might have a greater impact than for regular partners in genitourinary clinic populations.
Recommendations for future research are (1) to conduct randomised controlled trials, using biological outcomes, of the effectiveness of APT and of methods to increase testing for human immunodeficiency virus (HIV) and STIs after APT; (2) to make collection of HRQL data a priority in order to determine QALYs associated with the sequelae of curable STIs; and (3) to develop standardised parameter sets for curable STIs for the mathematical models of STI transmission that are used for policy-making. FUNDING The National Institute for Health Research Health Technology Assessment programme.
Abstract:
We investigated the effects of dalfampridine, the sustained-release form of 4-aminopyridine, on slow-phase velocity (SPV) and visual acuity (VA) in patients with downbeat nystagmus (DBN), as well as the side effects of the drug. In this proof-of-principle observational study, ten patients received dalfampridine 10 mg bid for 2 weeks. Recordings were conducted at baseline, 180 min after the first administration, after 2 weeks of treatment and after 4 weeks of wash-out. Mean SPV decreased from 2.12 deg/s ± 1.72 (mean ± SD) at baseline to 0.51 deg/s ± 1.00 180 min after the first administration of dalfampridine 10 mg and to 0.89 deg/s ± 0.75 after 2 weeks of treatment with dalfampridine (p < 0.05; post hoc both: p < 0.05). After a wash-out period of 1 week, mean SPV increased to 2.30 deg/s ± 1.6 (p < 0.05; post hoc both: p < 0.05). VA improved significantly during treatment with dalfampridine. Half of the patients did not report any side effects; the most commonly reported side effects were abdominal discomfort and dizziness. Dalfampridine is an effective treatment for DBN in terms of SPV and was well tolerated in all patients.
Abstract:
Background Acute cough is a common problem in general practice and is often caused by a self-limiting viral infection. Nonetheless, antibiotics are often prescribed in this situation, which may lead to unnecessary side effects and, even worse, the development of antibiotic-resistant microorganisms worldwide. This study assessed the role of point-of-care C-reactive protein (CRP) testing and other predictors of antibiotic prescription in patients who present with acute cough in general practice. Methods Patient characteristics, symptoms, signs, and laboratory and X-ray findings from 348 patients presenting to 39 general practitioners with acute cough, as well as characteristics of the GPs themselves, were recorded by fourth-year medical students during their three-week clerkships in general practice. Patient and clinician characteristics of those prescribed and those not prescribed antibiotics were compared using a mixed-effects model. Results Of 315 patients included in the study, 22% were prescribed antibiotics. The two groups of patients, those prescribed antibiotics and those treated symptomatically, differed significantly in age, demand for antibiotics, days of cough, rhinitis, lung auscultation, haemoglobin level, white blood cell count, CRP level and the GP's license to self-dispense antibiotics. After regression analysis, only the CRP level, the white blood cell count and the duration of symptoms were statistically significant predictors of antibiotic prescription. Conclusions The antibiotic prescription rate of 22% in adult patients with acute cough in the Swiss primary care setting is low compared to other countries. GPs appear to use point-of-care CRP testing, in addition to the duration of clinical symptoms, to help them decide whether or not to prescribe antibiotics.
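For readers who want to see the shape of such an analysis, the following is a minimal, illustrative Python sketch of a logistic regression of antibiotic prescription on the three predictors retained in this abstract (CRP, white blood cell count, duration of cough). It is not the authors' model: the study used a mixed-effects model accounting for clustering of patients within GPs, which this single-level sketch omits, and all data below are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: point-of-care CRP (mg/l), white blood cell count (10^9/l),
# duration of cough (days) for 12 invented patients.
X = np.array([
    [5.0,  6.2,  3], [62.0, 13.5,  9], [8.0,  7.1,  4], [12.0,  8.0,  5],
    [95.0, 14.2, 12], [40.0, 11.9,  8], [3.0,  5.8,  2], [20.0,  9.3,  6],
    [30.0, 10.5,  7], [15.0,  9.8, 10], [55.0, 12.0,  4], [10.0,  7.5,  3],
])
y = np.array([0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0])  # 1 = antibiotic prescribed

# Single-level logistic regression (a simplification of the mixed-effects model
# described in the abstract, which also has a random effect per GP).
model = LogisticRegression(max_iter=1000).fit(X, y)
for name, beta in zip(["CRP", "WBC", "cough days"], model.coef_[0]):
    print(f"{name}: odds ratio per unit = {np.exp(beta):.2f}")

The printed per-unit odds ratios are the kind of quantity behind the statement that CRP level, white blood cell count and symptom duration predicted prescription; with real data, the GP-level random effect would be needed to account for prescribing differences between practices.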
Abstract:
BACKGROUND After cardiac surgery with cardiopulmonary bypass (CPB), acquired coagulopathy often leads to post-CPB bleeding. Though multifactorial in origin, this coagulopathy is often aggravated by deficient fibrinogen levels. OBJECTIVE To assess whether laboratory and thrombelastometric testing on CPB can predict plasma fibrinogen immediately after CPB weaning. PATIENTS / METHODS This prospective study in 110 patients undergoing major cardiovascular surgery at risk of post-CPB bleeding compares fibrinogen level (Clauss method) and function (fibrin-specific thrombelastometry) in order to study the predictability of their course early after termination of CPB. Linear regression analysis and receiver operating characteristics were used to determine correlations and predictive accuracy. RESULTS Quantitative estimation of post-CPB Clauss fibrinogen from on-CPB fibrinogen was feasible with small bias (+0.19 g/l), but with poor precision and a percentage of error >30%. A clinically useful alternative approach was developed by using on-CPB A10 to predict a Clauss fibrinogen range of interest instead of a discrete level. An on-CPB A10 ≤10 mm identified patients with a post-CPB Clauss fibrinogen of ≤1.5 g/l with a sensitivity of 0.99 and a positive predictive value of 0.60; it also identified those without a post-CPB Clauss fibrinogen <2.0 g/l with a specificity of 0.83. CONCLUSIONS When measured on CPB prior to weaning, a FIBTEM A10 ≤10 mm is an early alert for post-CPB fibrinogen levels below or within the substitution range (1.5-2.0 g/l) recommended in case of post-CPB coagulopathic bleeding. This helps to minimize the delay to data-based hemostatic management after weaning from CPB.
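The "percentage of error" quoted here is, in method-comparison studies of this kind, conventionally computed from the Bland-Altman statistics; assuming the usual Critchley-style definition (the abstract does not spell it out):

\[ \text{percentage error} = \frac{1.96 \times \mathrm{SD}_{\text{differences}}}{\bar{x}_{\text{reference (Clauss)}}} \times 100\% \]

A value above the commonly cited 30% threshold is taken to mean the two measurements cannot be used interchangeably as point estimates, which is consistent with the authors' move to predicting a clinically relevant fibrinogen range (A10 ≤10 mm flagging levels below or within the 1.5-2.0 g/l substitution range) rather than a discrete value.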
Abstract:
BACKGROUND General practitioners (GPs) are in the best position to suspect dementia. The Mini-Mental State Examination (MMSE) and the Clock Drawing Test (CDT) are widely used. Additional neurological tests may increase the accuracy of diagnosis. We aimed to evaluate the ability of a Short Smell Test (SST) and the Palmo-Mental Reflex (PMR) to detect dementia in patients whose MMSE and CDT are normal but who show signs of cognitive dysfunction. METHODS This was a 3.5-year cross-sectional observational study in the Memory Clinic of the University Department of Geriatrics in Bern, Switzerland. Participating patients with normal MMSE (>26 points) and CDT (>5 points) were referred by GPs who suspected dementia. All were examined according to a standardized protocol. Diagnosis of dementia was based on DSM-IV TR criteria. We used the SST and PMR to determine whether they accurately detected dementia. RESULTS In our cohort, 154 patients suspected of dementia had normal MMSE and CDT test results. Of these, 17 (11%) were demented. If either SST or PMR was abnormal, sensitivity was 71% (95% CI 44-90%) and specificity 64% (95% CI 55-72%) for detecting dementia. If both tests were abnormal, sensitivity was 24% (95% CI 7-50%), but specificity increased to 93% (95% CI 88-97%). CONCLUSION Patients suspected of dementia but with normal MMSE and CDT results may benefit if the SST and PMR are added as diagnostic tools. If both SST and PMR are abnormal, this is a red flag to investigate these patients further, even though their neuropsychological screening results were negative.
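As a rough illustration of what these operating characteristics mean at the reported 11% prevalence, the following Python sketch recomputes them from an approximate 2x2 table for "SST or PMR abnormal". The cell counts are back-calculated from the percentages in the abstract (they are not reported directly) and are illustrative only.

# Approximate 2x2 table, reconstructed from: 17 demented of 154 patients,
# sensitivity ~71% (about 12/17) and specificity ~64% (about 88/137).
tp, fn = 12, 5    # demented patients with / without an abnormal SST or PMR
tn, fp = 88, 49   # non-demented patients with normal / abnormal tests

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # probability of dementia given an abnormal result
npv = tn / (tn + fn)   # probability of no dementia given normal results

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")

With these approximate counts the positive predictive value is only about 20% while the negative predictive value is about 95%, underlining why an abnormal screen is treated as grounds for further investigation rather than as a diagnosis.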
Abstract:
BACKGROUND Chronic postsurgical pain (CPSP) is an important clinical problem. Prospective studies of the incidence, characteristics and risk factors of CPSP are needed. OBJECTIVES The objective of this study was to evaluate the incidence and risk factors of CPSP. DESIGN A multicentre, prospective, observational trial. SETTING Twenty-one hospitals in 11 European countries. PATIENTS Three thousand one hundred and twenty patients undergoing surgery and enrolled in the European registry PAIN OUT. MAIN OUTCOME MEASURES Pain-related outcome was evaluated on the first postoperative day (D1) using a standardised pain outcome questionnaire. Follow-up at 6 and 12 months, by e-mail or telephone interview, used the Brief Pain Inventory (BPI) and the DN4 (Douleur Neuropathique four questions). The primary endpoint was the incidence of moderate to severe CPSP (numeric rating scale, NRS ≥3/10) at 12 months. RESULTS Complete data were available for 1044 patients at 6 months and for 889 patients at 12 months. At 12 months, the incidence of moderate to severe CPSP was 11.8% (95% CI 9.7 to 13.9) and of severe pain (NRS ≥6) 2.2% (95% CI 1.2 to 3.3). Signs of neuropathic pain were recorded in 35.4% (95% CI 23.9 to 48.3) and 57.1% (95% CI 30.7 to 83.4) of patients with moderate and severe CPSP, respectively. Functional impairment (BPI) at 6 and 12 months increased with the severity of CPSP (P < 0.01) and the presence of neuropathic characteristics (P < 0.001). Multivariate analysis identified orthopaedic surgery, preoperative chronic pain and percentage of time in severe pain on D1 as risk factors. A 10% increase in the percentage of time in severe pain was associated with a 30% increase in the incidence of CPSP at 12 months. CONCLUSION The collection of data on CPSP was feasible within the European registry PAIN OUT. The incidence of moderate to severe CPSP at 12 months was 11.8%. Functional impairment was associated with CPSP severity and neuropathic characteristics. Risk factors for CPSP in the present study were chronic preoperative pain, orthopaedic surgery and percentage of time in severe pain on D1. TRIAL REGISTRATION Clinicaltrials.gov identifier: NCT01467102.
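To read the dose-response statement quantitatively: if the association is multiplicative, as in a logistic or log-linear model with the D1 variable entered linearly (the abstract does not state the model form), then each additional 10 percentage points of time in severe pain scales the risk of CPSP by roughly a factor of 1.30, so for example

\[ 1.30^{2} \approx 1.69 , \]

i.e. approximately a 69% relative increase for a 20-percentage-point increase in time spent in severe pain on D1.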
Abstract:
OBJECTIVES The aim of this study was to assess the safety of the concurrent administration of clopidogrel and prasugrel loading doses in patients undergoing primary percutaneous coronary intervention. BACKGROUND Prasugrel is one of the preferred P2Y12 platelet receptor antagonists for ST-segment elevation myocardial infarction (STEMI) patients. Its use was evaluated clinically in clopidogrel-naive patients. METHODS Between September 2009 and October 2012, a total of 2,023 STEMI patients were enrolled in the COMFORTABLE (Comparison of Biomatrix Versus Gazelle in ST-Elevation Myocardial Infarction [STEMI]) and SPUM-ACS (Inflammation and Acute Coronary Syndromes) studies. Patients receiving a prasugrel loading dose were divided into 2 groups: 1) clopidogrel and a subsequent prasugrel loading dose, and 2) a prasugrel loading dose alone. The primary safety endpoint was Bleeding Academic Research Consortium types 3 to 5 bleeding in hospital at 30 days. RESULTS Of 2,023 patients undergoing primary percutaneous coronary intervention, 427 (21.1%) received clopidogrel and a subsequent prasugrel loading dose, 447 (22.1%) received a prasugrel loading dose alone, and the remainder received clopidogrel only. At 30 days, the primary safety endpoint was observed in 1.9% of those receiving clopidogrel and a subsequent prasugrel loading dose and 3.4% of those receiving a prasugrel loading dose alone (adjusted hazard ratio [HR]: 0.57; 95% confidence interval [CI]: 0.25 to 1.30, p = 0.18). The HAS-BLED (hypertension, abnormal renal/liver function, stroke, bleeding history or predisposition, labile international normalized ratio, elderly, drugs/alcohol concomitantly) bleeding score tended to be higher in prasugrel-treated patients (p = 0.076). The primary safety endpoint results, however, remained unchanged after adjustment for these differences (clopidogrel and a subsequent prasugrel loading dose vs. prasugrel only; HR: 0.54 [95% CI: 0.23 to 1.27], p = 0.16). No differences in the composite of cardiac death, myocardial infarction, or stroke were observed at 30 days (adjusted HR: 0.66, 95% CI: 0.27 to 1.62, p = 0.36). CONCLUSIONS This observational, nonrandomized study of STEMI patients suggests that administration of a loading dose of prasugrel in patients pre-treated with a loading dose of clopidogrel is not associated with an excess of major bleeding events. (Comparison of Biomatrix Versus Gazelle in ST-Elevation Myocardial Infarction [STEMI] [COMFORTABLE]; NCT00962416; and Inflammation and Acute Coronary Syndromes [SPUM-ACS]; NCT01000701).
Abstract:
BACKGROUND The objective of the study was to evaluate the implications of different classifications of rheumatic heart disease (RHD) on estimated prevalence, and to systematically assess the importance of incidental findings from echocardiographic screening among schoolchildren in Peru. METHODS We performed a cluster-randomized observational survey using portable echocardiography among schoolchildren aged 5 to 16 years from randomly selected public and private schools in Arequipa, Peru. Rheumatic heart disease was defined according to the modified World Health Organization (WHO) criteria and the World Heart Federation (WHF) criteria. FINDINGS Among 1395 eligible students from 40 classes and 20 schools, 1023 (73%) participated in the survey. The median age of the children was 11 years (interquartile range [IQR] 8-13 years) and 50% were girls. The prevalence of possible, probable and definite rheumatic heart disease according to the modified WHO criteria amounted to 19.7/1000 children and ranged from 10.2/1000 among children 5 to 8 years of age to 39.8/1000 among children 13 to 16 years of age; the prevalence of borderline/definite rheumatic heart disease according to the WHF criteria was 3.9/1000 children. 21 children (2.1%) were found to have congenital heart disease, 8 of whom were referred for percutaneous or surgical intervention. CONCLUSIONS The prevalence of RHD in Peru was considerably lower than in endemic regions of sub-Saharan Africa, southeast Asia, and Oceania, and was paralleled by a comparable number of previously undetected cases of congenital heart disease. Strategies to address collateral findings from echocardiographic screening are necessary in the setup of active surveillance programs for RHD. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT02353663.
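The practical impact of the two case definitions can be made concrete by back-calculating approximate case counts from the reported rates and the 1023 participants (these counts are not stated in the abstract):

\[ 19.7/1000 \times 1023 \approx 20 \text{ children (modified WHO criteria)}, \qquad 3.9/1000 \times 1023 \approx 4 \text{ children (WHF criteria)} , \]

so the same screened sample yields roughly five times as many cases under the modified WHO definition as under the WHF definition.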
Abstract:
BACKGROUND Increasing evidence suggests that psychosocial factors, including depression, predict incident venous thromboembolism (VTE) against a background of genetic and acquired risk factors. The role of psychosocial factors in the risk of recurrent VTE has not previously been examined. We hypothesized that depressive symptoms in patients with prior VTE are associated with an increased risk of recurrent VTE. METHODS In this longitudinal observational study, we investigated 271 consecutive patients, aged 18 years or older, referred for thrombophilia investigation with an objectively diagnosed episode of VTE. Patients completed the depression subscale of the Hospital Anxiety and Depression Scale (HADS-D). During the observation period, they were contacted by phone and information on recurrent VTE, anticoagulation therapy, and thromboprophylaxis in risk situations was collected. RESULTS Clinically relevant depressive symptoms (HADS-D score ≥8) were present in 10% of patients. During a median observation period of 13 months (range 5-48), 27 patients (10%) experienced recurrent VTE. After controlling for sociodemographic and clinical factors, a 3-point increase in the HADS-D score was associated with a 44% greater risk of recurrent VTE (OR 1.44, 95% CI 1.02, 2.06). Compared to patients with lower levels of depressive symptoms (HADS-D score 0-2), those with higher levels (HADS-D score 3-16) had a 4.1 times greater risk of recurrent VTE (OR 4.07, 95% CI 1.55, 10.66). CONCLUSIONS The findings suggest that depressive symptoms might contribute to an increased risk of recurrent VTE independent of other prognostic factors. An increased risk might already be present at subclinical levels of depressive symptoms.
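Assuming the reported OR of 1.44 per 3-point increase comes from a logistic model in which the HADS-D score was entered as a linear term (the abstract does not state this explicitly), the implied per-point association is

\[ e^{3\beta} = 1.44 \;\Rightarrow\; \beta = \frac{\ln 1.44}{3} \approx 0.12, \qquad e^{\beta} \approx 1.13 , \]

i.e. roughly a 13% increase in the odds of recurrent VTE per additional HADS-D point.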