896 results for Long-term Users
Abstract:
Eighty-one percent of a sample of long-term cannabis users was followed up at 1 year (162/200). Half (51%) were daily smokers, while 20% had substantially decreased or ceased use. More than half received a dependence diagnosis on each of three measures in the last year, with 44% dependent on all three. Remission was much more common than incidence of dependence. Nevertheless, use and dependence patterns were strongly related over time. Longitudinal analyses revealed that quantity of use and severity of dependence at baseline were the primary predictors of those same variables at follow-up. These data suggest that cannabis use and dependence are fairly stable among long-term users. (C) 2000 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
BACKGROUND: Antidepressants are among the most commonly prescribed drugs in primary care. The rise in use is mostly due to an increasing number of long-term users of antidepressants (LTU AD). Little is known about the factors driving increased long-term use. We examined the socio-demographic, clinical and health service use characteristics associated with LTU AD to extend our understanding of the factors that may be driving the increase in antidepressant use. METHODS: Cross-sectional analysis of 789 participants with probable depression (CES-D≥16), recruited from 30 randomly selected Australian general practices for a ten-year cohort study of depression, who were surveyed about their antidepressant use. RESULTS: 165 (21.0%) participants reported <2 years of antidepressant use and 145 (18.4%) reported ≥2 years of antidepressant use. After adjusting for depression severity, LTU AD was associated with: a single (OR 1.56, 95%CI 1.05-2.32) or recurrent (3.44, 2.06-5.74) episode of depression; use of SSRIs (3.85, 2.03-7.33), sedatives (2.04, 1.29-3.22) or antipsychotics (4.51, 1.67-12.17); functional limitations due to long-term illness (2.81, 1.55-5.08); poor/fair self-rated health (1.57, 1.14-2.15); inability to work (2.49, 1.37-4.53); benefits as main source of income (2.15, 1.33-3.49); GP visits longer than 20 min (1.79, 1.17-2.73); rating GP visits as moderately to extremely helpful (2.71, 1.79-4.11); and more self-help practices (1.16, 1.09-1.23). LIMITATIONS: All measures were self-report. The sample may not be representative of culturally different or adolescent populations. The cross-sectional design raises the possibility of "confounding by indication". CONCLUSIONS: Long-term antidepressant use is relatively common in primary care. It occurs within the context of complex mental, physical and social morbidities.
Whilst most long-term use is associated with a history of recurrent depression, there remains a significant opportunity for treatment re-evaluation and timely discontinuation.
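The adjusted odds ratios and 95% confidence intervals reported in this abstract follow the standard form for a 2×2 exposure-outcome table. As a minimal illustration (the counts below are hypothetical, not taken from the study), an odds ratio with a Wald confidence interval can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed cases,   b = exposed non-cases
        c = unexposed cases, d = unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(40, 60, 25, 150)
print(f"OR {or_:.2f} (95%CI {lo:.2f}-{hi:.2f})")
```

Note that the study's ORs are additionally adjusted for depression severity, which requires a logistic regression model rather than this raw 2×2 calculation.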
Abstract:
Benzodiazepines (BZD) and benzodiazepine-related drugs (RD) are the most commonly used psychotropics among the aged. The use of other psychotropics taken concomitantly with BZD/RD, and their cognitive effects in combination with BZD/RD, have not been studied frequently. The aim of this academic thesis was to describe and analyse relationships between the health of the aged and their use of BZD/RD alone or concomitantly with antipsychotics, antidepressants, opioids, antiepileptics and anticholinergics. In particular, the relationships between long-term use of BZD/RD and cognitive decline were studied. Additionally, the effect of melatonin on BZD/RD withdrawal and the cognitive effects of BZD/RD withdrawal were studied. This study used multiple data sets: the first study (I) was based on clinical data on aged patients (≥65 years; N=164) admitted to Pori City Hospital due to acute disease. The second data set (Studies II and III) was based on population-based data from the Lieto Study, a clinico-epidemiological longitudinal study carried out among the aged (≥65 years) in the municipality of Lieto. Follow-up data were formed by combining the cohort data collected in 1990-1991 (N=1283) and in 1998-1999 (N=1596) from those who participated in both cohorts (N=617). The third data set (Studies IV and V) was based on the Satauni Study's data. This study was performed in the City of Pori in 2009-2010. In the RCT part of the Satauni Study, ninety-two long-term users of BZD/RD were withdrawn from their drugs using melatonin versus placebo. Changes in their cognitive abilities were measured during and after BZD/RD withdrawal. BZD/RD use was related to worse cognitive and functional abilities, and their use may predict worse cognitive outcomes compared with BZD/RD non-users. Hypnotic use of BZD/RD could be withdrawn with psychosocial support in motivated participants, but melatonin did not improve the withdrawal results compared with placebo.
Cognitive abilities in psychomotor tests showed no, or only modest, improvements for up to six months after BZD/RD withdrawal. This suggests that the cognitive effects of BZD/RD may be long-lasting or permanent.
Abstract:
Long-term antidepressant treatment has increased and there is evidence of adverse effects; however, little is known about patients' experiences and views of this form of treatment. This study used mixed methods to examine patients' views and experiences of long-term antidepressant treatment, including benefits and concerns. Data from 180 patients, who were long-term users of antidepressants (3-15 years), were extracted from an anonymous online survey of patients' experiences of antidepressants in New Zealand. Participants had completed rating scales about the effectiveness of antidepressants, levels of depression before and during antidepressant use, quality of life, and perceived adverse effects. Two open-ended questions allowed participants to comment on personal experiences. The majority (89.4%) reported that antidepressants had improved their depression, although 30% reported moderate-to-severe depression while on antidepressants. Common adverse effects included withdrawal effects (73.5%), sexual problems (71.8%) and weight gain (65.3%). Adverse emotional effects, such as feeling emotionally numb (64.5%) and feeling addicted (43%), were also common. While the majority of patients were pleased with the benefits of antidepressant treatment, many were concerned about these adverse effects. Some expressed a need for more information about long-term risks, and for more information and support to discontinue.
Abstract:
Two hundred long-term cannabis users (58% male) were interviewed about their characteristics and experience of use. Respondents had been regularly using cannabis for an average of 11 years, and more than half (56%) used daily. The most common route of administration was a waterpipe, and nearly all (93%) smoked the flowering heads of the plant. One in five (21%) had a cannabis-related conviction. The benefits of use were perceived to be its relaxing, mood-enhancing effects and its ability to alter consciousness. The most commonly cited negative aspects of use were cost, negative psychological effects and legal status. Polydrug use was common, with alcohol and tobacco almost universally used on a regular basis. More than half the drinkers in the sample were consuming alcohol at hazardous or harmful levels.
Abstract:
What is the contribution of the provision, at no cost to users, of long-acting reversible contraceptive methods (LARC; the copper intrauterine device [IUD], the levonorgestrel-releasing intrauterine system [LNG-IUS], contraceptive implants and depot-medroxyprogesterone [DMPA] injection) towards the disability-adjusted life years (DALY) averted, through a Brazilian university-based clinic established over 30 years ago? Over the last 10 years of evaluation, provision of LARC methods and DMPA by the clinic is estimated to have contributed to DALY averted through preventing between 37 and 60 maternal deaths, 315-424 child deaths, 634-853 combined maternal morbidity and mortality and child mortality events, and 1056-1412 unsafe abortions. LARC methods are associated with high contraceptive effectiveness when compared with contraceptive methods that need frequent attention, perhaps because LARC methods are independent of individual or couple compliance. However, in general, previous studies have evaluated contraceptive methods during clinical studies over a short period of time, or not more than 10 years. Furthermore, information regarding the estimation of the DALY averted is scarce. We reviewed 50 004 medical charts from women who consulted for the first time seeking a contraceptive method over the period from 2 January 1980 through 31 December 2012. Women who consulted at the Department of Obstetrics and Gynaecology, University of Campinas, Brazil were new users and users switching contraceptive, including the copper IUD (n = 13 826), the LNG-IUS (n = 1525), implants (n = 277) and DMPA (n = 9387). Estimation of the DALY averted included maternal morbidity and mortality, child mortality and unsafe abortions averted. We obtained 29 416 contraceptive segments of use, including 25 009 contraceptive segments of use from 20 821 new users or switchers to any LARC method or DMPA with at least 1 year of follow-up.
The mean (± SD) age of the women at first consultation ranged from 25.3 ± 5.7 (range 12-47) years in the 1980s, to 31.9 ± 7.4 (range 16-50) years in 2010-2011. The most common contraceptive chosen at the first consultation was copper IUD (48.3, 74.5 and 64.7% in the 1980s, 1990s and 2000s, respectively). For an evaluation over 20 years, the cumulative pregnancy rates (SEM) were 0.4 (0.2), 2.8 (2.1), 4.0 (0.4) and 1.3 (0.4) for the LNG-IUS, the implants, copper IUD and DMPA, respectively and cumulative continuation rates (SEM) were 15.1 (3.7), 3.9 (1.4), 14.1 (0.6) and 7.3 (1.7) for the LNG-IUS, implants, copper IUD and DMPA, respectively (P < 0.001). Over the last 10 years of evaluation, the estimation of the contribution of the clinic through the provision of LARC methods and DMPA to DALY averted was 37-60 maternal deaths; between 315 and 424 child mortalities; combined maternal morbidity and mortality and child mortality of between 634 and 853, and 1056-1412 unsafe abortions averted. The main limitations are the number of women who never returned to the clinic (overall 14% among the four methods under evaluation); consequently the pregnancy rate could be different. Other limitations include the analysis of two kinds of copper IUD and two kinds of contraceptive implants as the same IUD or implant, and the low number of users of implants. In addition, the DALY calculation relies on a number of estimates, which may vary in different parts of the world. LARC methods and DMPA are highly effective and women who were well-counselled used these methods for a long time. The benefit of averting maternal morbidity and mortality, child mortality, and unsafe abortions is an example to health policy makers to implement more family planning programmes and to offer contraceptive methods, mainly LARC and DMPA, at no cost or at affordable cost for the underprivileged population. 
This study received partial financial support from the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant #2012/12810-4, and from the National Research Council (CNPq), grant #573747/2008-3. B.F.B., M.P.G. and V.M.C. were fellows of the scientific initiation programme of FAPESP. Since 2001, all the TCu380A IUDs have been donated by Injeflex, São Paulo, Brazil, and since 2006 all the LNG-IUS have been donated by the International Contraceptive Access Foundation (ICA), Turku, Finland. Both donations are unrestricted grants. The authors declare that there are no conflicts of interest associated with this study.
Abstract:
Patterns of vocal rehabilitation for 37 pharyngolaryngectomy patients and 55 total laryngectomy patients over a 5-year period were compared. An electrolarynx (EL) was introduced as the initial communication mode immediately after surgery for 98% of patients, with 30% of pharyngolaryngectomy and 74% of laryngectomy patients subsequently developing tracheoesophageal speech (TES) as their primary mode of communication. Follow-up with 14 of 37 pharyngolaryngectomy patients and 36 of 55 laryngectomy patients was conducted 1-6 years following surgery and revealed that 90% of the pharyngolaryngectomy patients maintained the use of TES in the long term, compared with 69% of the laryngectomy group. Long-term outcomes relating to communication disability and handicap did not differ significantly between the two surgical groups; however, the laryngectomy patients had significantly higher levels of wellbeing. Across the whole group of patients, statistical comparison revealed that patients using TES had significantly lower levels of disability, handicap and distress than EL users. Considering that lower levels of disability, handicap and distress are associated with TES, and the data support that suitably selected patients can maintain functional TES in the long term, increased application of this form of communication rehabilitation should be encouraged where viable for the pharyngolaryngectomy population. Copyright (C) 2003 S. Karger AG, Basel.
Abstract:
BACKGROUND: Sequence data from resistance testing offer unique opportunities to characterize the structure of human immunodeficiency virus (HIV) infection epidemics. METHODS: We analyzed a representative set of HIV type 1 (HIV-1) subtype B pol sequences from 5700 patients enrolled in the Swiss HIV Cohort Study. We pooled these sequences with the same number of sequences from foreign epidemics, inferred a phylogeny, and identified Swiss transmission clusters as clades having a minimal size of 10 and containing ≥80% Swiss sequences. RESULTS: More than one-half of Swiss patients were included within 60 transmission clusters. Most transmission clusters were significantly dominated by specific transmission routes, which were used to identify the following patient groups: men having sex with men (MSM) (38 transmission clusters; average cluster size, 29 patients) or patients acquiring HIV through heterosexual contact (HETs) and injection drug users (IDUs) (12 transmission clusters; average cluster size, 144 patients). Interestingly, there were no transmission clusters dominated by sequences from HETs only. Although 44% of all HETs who were infected between 1983 and 1986 clustered with injection drug users, this percentage decreased to 18% for 2003-2006 (P<.001), indicating a diminishing role of injection drug users in transmission among HETs over time. CONCLUSIONS: Our analysis suggests (1) the absence of a self-sustaining epidemic of HIV-1 subtype B in HETs in Switzerland and (2) a temporally decreasing clustering of HIV infections in HETs and IDUs.
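The cluster definition used above (a clade with at least 10 tips, of which at least 80% are Swiss sequences) amounts to a simple predicate over a clade's tip labels. A minimal sketch, assuming tips are labelled by ISO country code (the function name and label scheme are ours, not the study's):

```python
def is_swiss_cluster(tip_countries, min_size=10, min_swiss_frac=0.8):
    """Return True if a clade qualifies as a Swiss transmission cluster:
    at least `min_size` tips, of which >= `min_swiss_frac` are Swiss ('CH')."""
    n = len(tip_countries)
    if n < min_size:
        return False
    swiss = sum(1 for country in tip_countries if country == "CH")
    return swiss / n >= min_swiss_frac

print(is_swiss_cluster(["CH"] * 8 + ["DE", "FR"]))  # 10 tips, 80% Swiss -> True
print(is_swiss_cluster(["CH"] * 9))                 # only 9 tips -> False
```

In the study this test would be applied to every clade of the inferred phylogeny; a tree-traversal library such as ete3 or Bio.Phylo would supply the clades.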
Resumo:
Land plants have had the reputation of being problematic for DNA barcoding for two general reasons: (i) the standard DNA regions used in algae, animals and fungi have exceedingly low levels of variability and (ii) the typically used land plant plastid phylogenetic markers (e.g. rbcL, trnL-F, etc.) appear to have too little variation. However, no one has assessed how well current phylogenetic resources might work in the context of identification (versus phylogeny reconstruction). In this paper, we make such an assessment, particularly with two of the markers commonly sequenced in land plant phylogenetic studies, plastid rbcL and internal transcribed spacers of the large subunits of nuclear ribosomal DNA (ITS), and find that both of these DNA regions perform well even though the data currently available in GenBank/EBI were not produced to be used as barcodes and BLAST searches are not an ideal tool for this purpose. These results bode well for the use of even more variable regions of plastid DNA (such as, for example, psbA-trnH) as barcodes, once they have been widely sequenced. In the short term, efforts to bring land plant barcoding up to the standards being used now in other organisms should make swift progress. There are two categories of DNA barcode users, scientists in fields other than taxonomy and taxonomists. For the former, the use of mitochondrial and plastid DNA, the two most easily assessed genomes, is at least in the short term a useful tool that permits them to get on with their studies, which depend on knowing roughly which species or species groups they are dealing with, but these same DNA regions have important drawbacks for use in taxonomic studies (i.e. studies designed to elucidate species limits). For these purposes, DNA markers from uniparentally (usually maternally) inherited genomes can only provide half of the story required to improve taxonomic standards being used in DNA barcoding. 
In the long term, we will need to develop more sophisticated barcoding tools, which would be multiple, low-copy nuclear markers with sufficient genetic variability and PCR-reliability; these would permit the detection of hybrids and permit researchers to identify the 'genetic gaps' that are useful in assessing species limits.
Abstract:
OBJECTIVES: Non-steroidal anti-inflammatory drugs (NSAIDs) may cause kidney damage. This study assessed the impact of prolonged NSAID exposure on renal function in a large rheumatoid arthritis (RA) patient cohort. METHODS: Renal function was prospectively followed between 1996 and 2007 in 4101 RA patients with multilevel mixed models for longitudinal data over a mean period of 3.2 years. Among the 2739 'NSAID users' were 1290 patients treated with cyclooxygenase type 2 selective NSAIDs, while 1362 subjects were 'NSAID naive'. The primary outcome was the estimated glomerular filtration rate according to the Cockcroft-Gault formula (eGFRCG); secondary outcomes were the Modification of Diet in Renal Disease and Chronic Kidney Disease Epidemiology Collaboration equations and serum creatinine concentrations. In sensitivity analyses, NSAID dosing effects were compared for patients with NSAID registration in ≤/>50%, ≤/>80% or ≤/>90% of assessments. FINDINGS: In patients with baseline eGFRCG >30 mL/min, eGFRCG evolved without significant differences over time between 'NSAID users' (mean change in eGFRCG -0.87 mL/min/year, 95% CI -1.15 to -0.59) and 'NSAID naive' (-0.67 mL/min/year, 95% CI -1.26 to -0.09, p=0.63). In a multivariate Cox regression analysis adjusted for the significant confounders age, sex, body mass index, arterial hypertension and heart disease, and for other non-significant factors, NSAIDs were an independent predictor of accelerated renal function decline only in patients with advanced baseline renal impairment (eGFRCG <30 mL/min). Analyses with secondary outcomes and sensitivity analyses confirmed these results. CONCLUSIONS: NSAIDs had no negative impact on renal function estimates except in patients with advanced renal impairment.
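The primary outcome above is estimated with the Cockcroft-Gault formula, which computes creatinine clearance from age, weight and serum creatinine: CrCl = (140 − age) × weight / (72 × serum creatinine), multiplied by 0.85 for women. A minimal sketch (the illustrative inputs are ours, not the study's):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula.
    Serum creatinine is in mg/dL; result is multiplied by 0.85 for women."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

print(cockcroft_gault(60, 72, 1.0))               # (140-60)*72 / (72*1.0) = 80.0
print(cockcroft_gault(60, 72, 1.0, female=True))  # 80.0 * 0.85 = 68.0
```

This makes concrete why the study stratifies at eGFRCG 30 mL/min: the formula scales inversely with serum creatinine, so small creatinine changes in already-impaired patients move the estimate substantially.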
Abstract:
The evaluation of long-term care (LTC) systems carried out in Work Package 7 of the ANCIEN project shows which performance criteria are important and, based on the available information, how European countries score on those criteria. This paper summarises the results and discusses the policy implications. An overall evaluation was carried out for four representative countries: Germany, the Netherlands, Spain and Poland. Of the four countries, the Dutch system has the highest scores on quality of life of LTC users, quality of care and equity of the LTC system, and it performs second-best after Poland in terms of the total burden of care (consisting of the financial burden and the burden of informal caregiving). The German system has somewhat lower scores than the Dutch on all four dimensions. The Polish system excels in having a low total burden of care, but it scores the lowest on quality of care and equity. The Spanish system has few extreme scores. Some important lessons are the following. The performance of an LTC system is a complex concept in which many dimensions have to be included. Specifically, the impact of informal caregiving on the caregivers and on society should not be forgotten. The role of the state in funding and organising LTC versus individual responsibilities is one of the most important differences among countries. Choices concerning private funding and the role of informal care have a large effect not only on public expenditures but also on the fairness of the system. International research into the relative preferences for the different performance criteria could produce a sound basis for the weights used in the overall evaluation.
Abstract:
This study assessed the efficacy of implantable cardioverter-defibrillators (ICDs) in patients with Chagas' heart disease (ChHD) and identified the clinical predictors of mortality and ICD shock during long-term follow-up. ChHD is associated with ventricular tachyarrhythmias and an increased risk of sudden cardiac death. Although ChHD is a common form of cardiomyopathy among Latin American ICD users, little is known about ICD efficacy in the treatment of this population. The study cohort included 116 consecutive patients with ChHD and an ICD implanted for secondary prevention. Of the 116 patients, 83 (72%) were men; the mean age was 54 +/- 10.7 years. Several clinical variables were tested in a multivariate Cox model for predicting long-term mortality. The average follow-up was 45 +/- 32 months. New York Heart Association class I-II developed in 83% of patients. The mean left ventricular ejection fraction was 42 +/- 16% at implantation. Of the 116 patients, 58 (50%) had appropriate shocks and 13 (11%) had inappropriate therapy. A total of 31 patients died (7.1% annual mortality rate). New York Heart Association class III (hazard ratio [HR] 3.09, 95% confidence interval 1.37 to 6.96, p = 0.0064) was a predictor of a worse prognosis. The left ventricular ejection fraction (HR 0.972, 95% confidence interval 0.94 to 0.99, p = 0.0442) and low cumulative right ventricular pacing (HR 0.23, 95% confidence interval 0.11 to 0.49, p = 0.0001) were predictors of better survival. The left ventricular diastolic diameter was an independent predictor of appropriate shock (HR 1.032, 95% confidence interval 1.004 to 1.060, p = 0.025). In conclusion, in a long-term follow-up, ICD efficacy for secondary sudden cardiac death prevention in patients with ChHD was marked by a favorable annual rate of all-cause mortality (7.1%); 50% of the cohort received appropriate shock therapy.
New York Heart Association class III and left ventricular ejection fraction were independent predictors of worse prognosis, and low cumulative right ventricular pacing defined better survival. (C) 2012 Elsevier Inc. All rights reserved. (Am J Cardiol 2012;110:1040-1045)