820 results for patient reported outcome measures
Abstract:
OBJECTIVE AND DESIGN A systematic review of the literature was performed to assess the ability of the progestin dienogest (DNG) to influence the inflammatory response of endometriotic cells. MAIN OUTCOME MEASURES In vitro and in vivo studies reporting an influence of DNG on the inflammatory response in eutopic or ectopic endometrial tissue (animal or human). RESULTS After strict inclusion criteria were applied, 15 studies were identified that reported a DNG influence on the inflammatory response in endometrial tissue. These studies identified a modulation of prostaglandin (PG) production and metabolism (PGE2, PGE2 synthase, cyclo-oxygenase-2 and microsomal PGE synthase-1), pro-inflammatory cytokine and chemokine production [interleukin (IL)-1β, IL-6, IL-8, tumor necrosis factor-α, monocyte chemoattractant protein-1 and stromal cell-derived factor-1], growth factor biosynthesis (vascular endothelial growth factor and nerve growth factor) and signaling kinases responsible for the control of inflammation. The evidence supports a progesterone receptor (PR)-mediated inhibition of the inflammatory response in PR-expressing epithelial cells. It also indicates that DNG inhibited the inflammatory response in stromal cells; however, whether this occurred via a PR-mediated mechanism is not clear. CONCLUSIONS DNG has a significant effect on the inflammatory microenvironment of endometriotic lesions that may contribute to its clinical efficacy. A better understanding of the specific anti-inflammatory activity of DNG, and of whether it contributes to its clinical efficacy, can help develop treatments that focus on the inhibition of inflammation while minimizing hormonal modulation.
Abstract:
BACKGROUND Atypical meningiomas are intermediate-grade brain tumours with a recurrence rate of 39-58%. It is not known whether early adjuvant radiotherapy reduces the risk of tumour recurrence and whether the potential side-effects are justified. An alternative management strategy is to perform active monitoring with magnetic resonance imaging (MRI) and to treat at recurrence. There are no randomised controlled trials comparing these two approaches. METHODS/DESIGN A total of 190 patients will be recruited from neurosurgical/neuro-oncology centres across the United Kingdom, Ireland and mainland Europe. Adult patients undergoing gross total resection of intracranial atypical meningioma are eligible. Patients with multiple meningiomas, optic nerve sheath meningioma, a previous intracranial tumour, previous cranial radiotherapy or neurofibromatosis will be excluded. Informed consent will be obtained from all patients. This is a two-stage trial (both stages will run in parallel): Stage 1 (qualitative study) is designed to maximise patient and clinician acceptability, thereby optimising recruitment and retention. Patients wishing to continue will proceed to randomisation. In Stage 2 (randomisation), patients will be randomised to receive either early adjuvant radiotherapy over 6 weeks (60 Gy in 30 fractions) or active monitoring. The primary outcome measure is time to MRI evidence of tumour recurrence (progression-free survival (PFS)). Secondary outcome measures include radiotherapy toxicity, quality of life, neurocognitive function, time to second-line treatment, time to death (overall survival (OS)) and incremental cost per quality-adjusted life year (QALY) gained. DISCUSSION ROAM/EORTC-1308 is the first multi-centre randomised controlled trial designed to determine whether early adjuvant radiotherapy reduces the risk of tumour recurrence following complete surgical resection of atypical meningioma. The results of this study will be used to inform current neurosurgery and neuro-oncology practice worldwide. TRIAL REGISTRATION ISRCTN71502099, registered on 19 May 2014.
Abstract:
Background Mindfulness has its origins in an Eastern Buddhist tradition that is over 2500 years old and can be defined as a specific form of attention that is non-judgmental, purposeful, and focused on the present moment. Over the last decades it has become well established in cognitive-behavior therapy, where it has mainly been investigated in manualized group settings such as mindfulness-based stress reduction and mindfulness-based cognitive therapy. However, there is scarce research evidence on the effects of mindfulness as a treatment element in individual therapy. Consequently, the need to investigate mindfulness under effectiveness conditions with trainee therapists has been highlighted. Methods/Design To fill this research gap, we designed the PrOMET Study. We will investigate the effects of brief, audiotape-presented, session-introducing interventions with mindfulness elements, conducted by trainee therapists and their patients at the beginning of individual therapy sessions, in a prospective, randomized, controlled design under naturalistic conditions with a total of 30 trainee therapists and 150 patients with depression and anxiety disorders in a large outpatient training center. We hypothesize that the session-introducing intervention with mindfulness elements will have positive effects on the primary outcomes of therapeutic alliance (Working Alliance Inventory) and general clinical symptomatology (Brief Symptom Checklist) in contrast to the session-introducing progressive muscle relaxation and treatment-as-usual control conditions. Treatment duration is 25 therapy sessions. Therapeutic alliance will be assessed on a session-to-session basis. Clinical symptomatology will be assessed at baseline and at sessions 5, 15 and 25. We will conduct multilevel modeling to address the nested data structure. Secondary outcome measures include depression, anxiety, interpersonal functioning, mindful awareness, and mindfulness during the sessions. Discussion The study results could have important practical implications, because they could inform ideas on how to improve the clinical training of psychotherapists in a way that is easy to implement: the brief session-introducing interventions with mindfulness elements are delivered directly in the treatment sessions and require no complex infrastructure or additional time.
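As an illustration of the multilevel modeling step mentioned above, the following is a minimal sketch of a random-intercept model for session-wise alliance ratings nested within patients. The file name, column names (alliance, arm, session, patient_id) and the two-level structure are assumptions for illustration only, not the PrOMET analysis plan.

```python
# Minimal sketch (assumed column names, not the PrOMET analysis code):
# session-wise alliance ratings nested within patients, random intercept per patient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("alliance_long.csv")  # hypothetical long format: one row per patient-session

# Fixed effects for treatment arm and session number; patients as grouping factor.
model = smf.mixedlm("alliance ~ arm + session", data=df, groups="patient_id")
result = model.fit()
print(result.summary())
```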
Abstract:
Objective: To assess neuropsychological outcome as a safety measure and quality control in patients with subthalamic nucleus (STN) stimulation for PD. Background: Deep brain stimulation (DBS) is considered a relatively safe treatment used in patients with movement disorders. However, neuropsychological alterations have been reported in patients with STN DBS for PD. Cognition and mood are important determinants of quality of life in PD patients and must be assessed for safety control. Methods: Seventeen consecutive patients (8 women) who underwent STN DBS for PD were assessed before and 4 months after surgery. Besides motor symptoms (UPDRS-III), mood (Beck Depression Inventory, Hamilton Depression Rating Scale) and neuropsychological aspects, mainly executive functions, were assessed (Mini-Mental State Examination, semantic and phonemic verbal fluency, go/no-go test, Stroop test, Trail Making Test, tests of alertness and attention, digit span, word-list learning, praxis, Boston Naming Test, figure drawing, visual perception). Paired t-tests were used for comparisons before and after surgery. Results: Patients were 61.6±7.8 years old at baseline assessment. All surgeries were performed without major adverse events. Motor symptoms "on" medication remained stable whereas they improved in the "off" condition (p<0.001). Mood was not depressed before surgery and remained unchanged at follow-up. All neuropsychological outcome measures remained stable at follow-up with the exception of semantic verbal fluency and word-list learning. Semantic verbal fluency decreased by 21±16% (p<0.001) and there was a trend toward worse phonemic verbal fluency after surgery (p=0.06). Recall of a list of 10 words was worse after surgery only on the third recall attempt (by 13%, p<0.005). Conclusions: Verbal fluency decreased in our patients after STN DBS, as previously reported. The procedure was otherwise safe and did not lead to deterioration of mood.
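A minimal sketch of the paired pre/post comparison described above, using SciPy's paired t-test on illustrative numbers rather than the study's data:

```python
# Hedged sketch: paired comparison of a neuropsychological score before and
# after surgery (illustrative values, not the study's data).
import numpy as np
from scipy import stats

pre = np.array([22, 18, 25, 20, 17, 23, 19, 21])   # e.g. semantic fluency at baseline
post = np.array([17, 15, 21, 16, 14, 19, 15, 18])  # same patients, 4 months post-op

t_stat, p_value = stats.ttest_rel(pre, post)
pct_change = 100 * (post - pre).mean() / pre.mean()
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, mean change = {pct_change:.1f}%")
```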
Abstract:
INTRODUCTION Surgical decompression for lumbar spinal stenosis (LSS) has been associated with poorer outcomes in patients with pronounced low back pain (LBP) compared with patients with predominant leg pain. This cross-registry study assessed potential benefits of the interlaminar coflex® device as an add-on to bony decompression alone. METHODS Patients with lumbar decompression plus coflex® (SWISSspine registry) were compared with decompressed controls (Spine Tango registry). Inclusion criteria were LSS and a preoperative back pain level of ≥5 points. 1:1 propensity score-based matching was performed. Outcome measures were back and leg pain relief, COMI score improvement, patient satisfaction, complication rates, and revision rates. RESULTS Fifty matched pairs were created, with no residual significant differences except for age. At the 7-9-month follow-up interval the coflex® group showed greater back pain relief (p=0.014), leg pain relief (p<0.001) and COMI score improvement (p=0.029) than the decompression group. Patient satisfaction was 90% in both groups. No revision was documented in the coflex® group and one in the decompression group (2.0%). DISCUSSION In the short term, lumbar decompression with coflex®, compared with decompression alone, in patients with LSS and pronounced LBP at baseline is a safe and effective treatment option that appears beneficial regarding clinical and functional outcomes. However, residual confounding from unmeasured covariates may have partially influenced our findings. Also, despite careful inclusion and exclusion of cases, the cross-registry approach introduces a potential for selection bias that we could not fully control for, making additional studies necessary.
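The 1:1 propensity-score matching step could look roughly like the sketch below; the pooled registry file, covariate names and greedy nearest-neighbour matching without replacement are assumptions for illustration, not the actual SWISSspine/Spine Tango procedure.

```python
# Hedged sketch of 1:1 propensity-score matching of coflex cases to
# decompression-only controls (hypothetical, numeric covariates assumed).
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("registry_pooled.csv")  # hypothetical pooled registry extract
covariates = ["age", "sex", "baseline_back_pain", "baseline_leg_pain", "baseline_comi"]

# Propensity score: probability of receiving coflex given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["coflex"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["coflex"] == 1].copy()
controls = df[df["coflex"] == 0].copy()

pairs = []
for _, row in treated.iterrows():
    j = (controls["ps"] - row["ps"]).abs().idxmin()  # closest remaining control
    pairs.append((row.name, j))
    controls = controls.drop(index=j)                # match without replacement

print(f"{len(pairs)} matched pairs formed")
```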
Abstract:
Specific aims. This study estimated the accuracy of alternative numerator methods for attributing health care utilization and associated costs to diabetes by comparing findings from those methods with findings from a benchmark denominator method. Methods. Using Medicare's 1995 inpatient and enrollment databases for the elderly in Texas, the researcher developed alternative estimates of costs attributable to diabetes. The alternative numerator methods included selection of all records having diabetes as a principal or secondary diagnosis, and a complex ICD-9-CM sorting routine previously developed for a study of diabetes costs in Texas. Findings from the numerator methods were compared with those from a benchmark denominator method based on attributable risk and adapted from a study of national diabetes costs by the American Diabetes Association. This study applied age-, gender- and ethnicity-specific estimates of diabetes prevalence taken from the 1987–94 National Health Interview Surveys to person-months of Medicare Part A, non-HMO enrollment for Texas in 1995. Outcome measures were the number of persons identified as having diabetes using alternative definitions of the disease, and the number of hospital stays, patient days, and costs using alternative methods for attributing care and costs to diabetes. Cost estimates were based on Medicare payments plus deductibles, co-pays and third-party payments. Findings. Numerator methods for attributing costs to diabetes produced findings quite different from those of the benchmark denominator method. When attribution was based on diabetes as a principal or secondary diagnosis, the resulting estimates were significantly higher than those obtained from the denominator method. The more complex sorting routine produced estimates near the lower boundary of the confidence interval associated with estimates from the benchmark method. Conclusions. Numerator methods employed by previous researchers poorly estimate the costs of diabetes. While crude mathematical adjustments can be made to the respective numerator approaches, a more useful strategy would be to refine the complex sorting routine to include more hospitalizations. This report recommends approaches to improving methods previously employed in studies of diabetes costs.
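To make the contrast concrete, the sketch below compares a simple numerator estimate (summing payments for stays coded with diabetes) with a prevalence-based attributable-fraction estimate for a single stratum. The file, column names, prevalence and rate ratio are assumptions for illustration and simplify the denominator method actually used in the study.

```python
# Hedged sketch contrasting a "numerator" estimate with an attributable-fraction
# ("denominator") estimate for one age/sex/ethnicity stratum (illustrative only).
import pandas as pd

stays = pd.read_csv("medicare_stays.csv")  # hypothetical claims extract

# Numerator method: attribute every stay carrying a diabetes code (ICD-9-CM 250.xx).
dm_coded = stays["diagnoses"].str.contains(r"\b250", na=False)
numerator_cost = stays.loc[dm_coded, "payment"].sum()

# Denominator method: apply a population attributable fraction to all stratum costs.
prevalence = 0.18       # stratum-specific prevalence of diagnosed diabetes (assumed)
rate_ratio = 1.6        # cost ratio, persons with vs without diabetes (assumed)
paf = prevalence * (rate_ratio - 1) / (1 + prevalence * (rate_ratio - 1))
denominator_cost = paf * stays["payment"].sum()

print(f"numerator: ${numerator_cost:,.0f}  denominator: ${denominator_cost:,.0f}")
```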
Abstract:
Context: Despite tremendous strides in HIV treatment over the past decade, resistance remains a major problem. A growing number of patients develop resistance and require new therapies to suppress viral replication. Objective: To assess the safety of multiple administrations of the anti-CD4 receptor (anti-CD4) monoclonal antibody ibalizumab given as intravenous (IV) infusions, in three dosage regimens, in subjects infected with human immunodeficiency virus (HIV-1). Design: Phase 1, multi-center, open-label, randomized clinical trial comparing the safety, pharmacokinetics and antiviral activity of three dosages of ibalizumab. Setting: Six clinical trial sites in the United States. Participants: A total of twenty-two HIV-positive patients on no antiretroviral therapy or on a stable failing regimen. Intervention: Patients were randomized to one of two treatment groups (Arms A and B), followed by non-randomized enrollment in Arm C. Patients randomized to Arm A received 10 mg/kg of ibalizumab every 7 days, for a total of 10 doses; patients randomized to Arm B received a total of six doses of ibalizumab: a single loading dose of 10 mg/kg on Day 1 followed by five maintenance doses of 6 mg/kg every 14 days, starting at Week 1. Patients assigned to Arm C received 25 mg/kg of ibalizumab every 14 days for a total of 5 doses. All patients were followed for safety for an additional 7 to 8 weeks. Main Outcome Measures: Clinical and laboratory assessments of the safety and tolerability of multiple administrations of ibalizumab in HIV-infected patients. Secondary measures of efficacy included HIV-1 RNA (viral load) measurements. Results: 21 patients were treatment-experienced and 1 was naïve to HIV therapy. Six patients were failing despite therapy and 15 were on no current HIV treatment. Mean baseline viral load (4.78 log10; range 3.7-5.9) and CD4+ cell counts (332/μL; range 89-494) were similar across cohorts. Mean peak decreases in viral load from baseline of 0.99 log10, 1.11 log10, and 0.96 log10 occurred by Week 2 in Cohorts A, B and C, respectively. Viral loads decreased by >1.0 log10 in 64% of patients; 4 patients' viral loads were suppressed to <400 copies/mL. Viral loads returned towards baseline by Week 9, with reduced susceptibility to ibalizumab. CD4+ cell counts rose transiently and returned toward baseline. Maximum median elevations above baseline in CD4+ cell counts for Cohorts A, B and C were +257, +198 and +103 cells/μL, respectively, and occurred within 3 weeks in 16 of 22 subjects. The half-life of ibalizumab was 3-3.5 days and elimination was characteristic of capacity-limited kinetics. Administration of ibalizumab was well tolerated. Four serious adverse events were reported during the study; none of these events were related to study drug. Headache, nausea and cough were the most frequently reported treatment-emergent adverse events, and there were no laboratory abnormalities related to study drug. Conclusions: Ibalizumab administered either weekly or bi-weekly was safe and well tolerated and demonstrated antiviral activity. Further studies of ibalizumab in combination with standard antiretroviral treatments are warranted.
Abstract:
Ordinal outcomes are frequently employed in diagnosis and clinical trials. Clinical trials of Alzheimer's disease (AD) treatments are a case in point, using the status of mild, moderate or severe disease as the outcome measure. As in many other outcome-oriented studies, the disease status may be misclassified. This study estimates the extent of misclassification in an ordinal outcome such as disease status. It also estimates the extent of misclassification of a predictor variable such as genotype status. An ordinal logistic regression model is commonly used to model the relationship between disease status, the effect of treatment, and other predictive factors. A simulation study was done. First, data were created based on a set of hypothetical parameters and hypothetical rates of misclassification. Next, the maximum likelihood method was employed to generate likelihood equations accounting for misclassification. The Nelder-Mead simplex method was used to solve for the misclassification and model parameters. Finally, this method was applied to an AD dataset to detect the amount of misclassification present. The estimates of the ordinal regression model parameters were close to the hypothetical parameters: β1 was hypothesized at 0.50 and the mean estimate was 0.488; β2 was hypothesized at 0.04 and the mean of the estimates was 0.04. Although the estimates of the rates of misclassification of X1 were not as close as those for β1 and β2, they validate the method. The 0-to-1 misclassification of X1 was hypothesized at 2.98% and the mean of the simulated estimates was 1.54%; in the best case, the misclassification of k from high to medium was hypothesized at 4.87% and had a sample mean of 3.62%. In the AD dataset, the estimated odds ratio for X1 (having both copies of the APOE 4 allele) changed from 1.377 to 1.418 when the analysis included adjustment for misclassification, demonstrating that the odds ratio estimates change once misclassification is accounted for.
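A minimal sketch of the kind of likelihood described above: a proportional-odds model whose observed outcome is misclassified, fitted by maximum likelihood with the Nelder-Mead simplex. For simplicity the misclassification matrix is treated as known here, whereas the study estimated the misclassification rates jointly with the model parameters; all numbers are simulated, not the study's data.

```python
# Hedged sketch: ML fit of a proportional-odds model with a known outcome
# misclassification matrix, optimised with Nelder-Mead (toy illustration).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
alpha_true, beta_true = np.array([-0.5, 1.0]), 0.5

# Simulate a true 3-category ordinal outcome from a proportional-odds model.
cum = expit(alpha_true[None, :] - beta_true * x[:, None])       # P(Y<=0), P(Y<=1)
probs = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1 - cum[:, 1]])
y_true = np.array([rng.choice(3, p=p) for p in probs])

# Misclassify the observed outcome with a fixed confusion matrix M[true, obs].
M = np.array([[0.95, 0.05, 0.00],
              [0.03, 0.94, 0.03],
              [0.00, 0.05, 0.95]])
y_obs = np.array([rng.choice(3, p=M[k]) for k in y_true])

def neg_log_lik(theta):
    a1, a2, b = theta
    if a2 <= a1:                      # keep cutpoints ordered
        return np.inf
    c = expit(np.column_stack([a1 - b * x, a2 - b * x]))
    p_true = np.column_stack([c[:, 0], c[:, 1] - c[:, 0], 1 - c[:, 1]])
    p_obs = p_true @ M                # mixture over possible true categories
    return -np.log(p_obs[np.arange(n), y_obs]).sum()

fit = minimize(neg_log_lik, x0=[-1.0, 1.0, 0.0], method="Nelder-Mead")
print("estimated (alpha1, alpha2, beta):", fit.x.round(3))
```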
Abstract:
Background. Cardiovascular disease (CVD) is of striking public health significance because of its high prevalence and mortality and its huge economic burden all over the world, especially in industrialized countries. Major risk factors for CVD have been the targets of population-wide prevention in the United States. Economic evaluations provide structured information regarding the efficiency of resource utilization, which can inform decisions on resource allocation. The main purpose of this review is to investigate the pattern of study design of economic evaluations of interventions for CVD. Methods. Primary journal articles published during 2003-2008 were systematically retrieved via relevant keywords from Medline, the NHS Economic Evaluation Database (NHS EED) and EBSCO Academic Search Complete. Only full economic evaluations of narrowly defined CVD interventions were included in this review. The methodological data of interest were extracted from the eligible articles and reorganized in a Microsoft Access database. Chi-square tests in SPSS were used to analyze the associations between pairs of categorical variables. Results. One hundred and twenty eligible articles were reviewed after two steps of literature selection with explicit inclusion and exclusion criteria. Descriptive statistics were reported regarding the evaluated interventions, outcome measures, unit costing and cost reports. The chi-square test of the association between prevention level of the intervention and category of time horizon showed no statistical significance. The chi-square test showed that sponsor type was significantly associated with whether the new or the standard intervention was concluded to be more cost-effective. Conclusions. Tertiary prevention and medication interventions are the major interests of economic evaluators. The majority of the evaluations claimed either a provider's or a payer's perspective. Almost all evaluations adopted a gross costing strategy for unit cost data rather than micro-costing. The EQ-5D is the most commonly used instrument for subjective outcome measurement. More than half of the evaluations used decision-analytic modeling techniques. A lack of consistency in study design standards appears in several aspects of the published evaluations. The prevention level of an intervention is not likely to be a factor in evaluators' decisions about whether to design an evaluation over a lifetime horizon. Published evaluations sponsored by industry are more likely to conclude that the new intervention is more cost-effective than the standard intervention.
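The chi-square tests of association reported above follow the standard contingency-table form; a minimal sketch with purely illustrative counts (not the review's data):

```python
# Hedged sketch: chi-square test of association between sponsor type and the
# cost-effectiveness conclusion (counts are illustrative only).
import numpy as np
from scipy.stats import chi2_contingency

#                  new intervention favoured | standard favoured
table = np.array([[30, 10],    # industry-sponsored
                  [25, 35]])   # non-industry / public sponsor

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```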
Abstract:
The objectives of this study were to compare female child-care providers with female university workers and with mothers of children in child-care centers with respect to: (1) frequency of illness and work days lost due to infectious diseases; (2) prevalence of antibodies against measles, rubella, mumps, hepatitis B, hepatitis A, chickenpox and cytomegalovirus (CMV); and (3) status regarding health insurance and job benefits. Subjects from twenty child-care centers and twenty randomly selected departments of a university in Houston, Texas were studied in a cross-sectional fashion. Participants comprised a cluster sample of 281 female child-care providers from randomly selected child-care centers, a cluster sample of 286 university workers from randomly selected departments, and a systematic sample of 198 mothers of children from randomly selected child-care centers. Main outcome measures were: (1) self-reported frequency of infectious diseases and number of work days lost due to infectious diseases; (2) presence of antibodies in blood; and (3) self-reported health insurance and job benefits. In comparison with university workers, child-care providers reported a higher prevalence of infectious diseases in the past 30 days, lost three times more work days due to infectious diseases, and were more likely to have anti-core antibodies against hepatitis B (odds ratio [OR] 3.16, 95% CI 1.27-7.85) and antibodies against rubella (OR 1.88, 95% CI 1.02-3.45). Child-care providers had less health insurance and fewer job-related benefits than mothers of children attending child-care centers. Regulations designed to reduce transmission of vaccine- and non-vaccine-preventable diseases in child-care centers should be strictly enforced. In addition, policies to improve the health insurance and job benefits of child-care providers are urgently needed.
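For reference, an odds ratio with a Woolf (log-based) 95% confidence interval of the kind reported above can be computed from a 2x2 table as in this sketch; the counts are hypothetical and chosen only for illustration.

```python
# Hedged sketch: odds ratio and Woolf 95% CI from a 2x2 seroprevalence table
# (hypothetical counts; the abstract reports OR = 3.16, 95% CI 1.27-7.85).
import numpy as np
from scipy.stats import norm

# rows: child-care providers vs university workers; cols: antibody-positive, -negative
a, b = 22, 259   # providers: positive, negative (hypothetical)
c, d = 7, 279    # university workers: positive, negative (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
z = norm.ppf(0.975)
ci = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```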
Abstract:
Next to leisure, sport, and household activities, the most common activity resulting in medically consulted injuries and poisonings in the United States is work, with an estimated 4 million workplace-related episodes reported in 2008 (U.S. Department of Health and Human Services, 2009). To address the risks inherent to various occupations, risk management programs are typically put in place that include worker training, engineering controls, and personal protective equipment. Recent studies have shown that such interventions alone are insufficient to adequately manage workplace risks, and that the climate in which the workers and the safety program exist (known as the "safety climate") is an equally important consideration. The organizational safety climate is so important that many studies have focused on developing means of measuring it in various work settings. While safety climate studies have been reported for several industrial settings, published studies assessing safety climate in the university work setting are largely absent. Universities are particularly unique workplaces because of the potential exposure to a diversity of agents representing both acute and chronic risks. Universities are also unique because readily detectable health and safety outcomes are relatively rare. The ability to measure safety climate in a work setting where systemic outcome measures are rarely observed could serve as a powerful means of evaluating safety risk management programs. The goal of this research study was the development of a survey tool to measure safety climate specifically in the university work setting. The use of a standardized tool also allows for comparisons among universities throughout the United States. A specific study objective, which was accomplished, was to quantitatively assess safety climate at five universities across the United States: 971 participants at the five universities completed an online questionnaire to measure the safety climate. The average safety climate score across the five universities was 3.92 on a scale of 1 to 5, with 5 indicating very high perceptions of safety at these universities. The two lowest overall dimensions of university safety climate were "acknowledgement of safety performance" and "department and supervisor's safety commitment". The results underscore how the perception of safety climate is significantly influenced at the local level. A second study objective, evaluating the reliability and validity of the safety climate questionnaire, was also accomplished. A third objective fulfilled was to provide executive summaries of the questionnaire results to the participating universities' health & safety professionals and to collect feedback on usefulness, relevance and perceived accuracy. Overall, the professionals found the survey and results to be very useful, relevant and accurate. Finally, the safety climate questionnaire will be offered to other universities for benchmarking purposes at the annual meeting of a nationally recognized university health and safety organization. The ultimate goal of the project, which was accomplished, was the creation of a standardized tool that can be used to measure safety climate in the university work setting and facilitate meaningful comparisons among institutions.
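A minimal sketch of how a mean climate score and an internal-consistency (Cronbach's alpha) check might be computed from Likert-scale responses; the file and item layout are assumptions, not the actual questionnaire.

```python
# Hedged sketch: overall climate score and Cronbach's alpha for a 1-5 Likert
# safety-climate survey (hypothetical item layout).
import pandas as pd

responses = pd.read_csv("climate_survey.csv")   # one row per respondent, items scored 1-5
items = [c for c in responses.columns if c.startswith("q")]

overall_score = responses[items].mean(axis=1).mean()   # grand mean on the 1-5 scale

def cronbach_alpha(item_df):
    k = item_df.shape[1]
    item_var = item_df.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = item_df.sum(axis=1).var(ddof=1)         # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

print(f"mean climate score = {overall_score:.2f}, alpha = {cronbach_alpha(responses[items]):.2f}")
```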
Abstract:
This study examines the role of socially desirable responding (SDR) in smoking cessation program success. SDR is the tendency for individuals to give responses that put themselves in what they perceive to be a socially desirable light. This research is a secondary analysis of data from Project Cognition, a study designed to examine the associations between performance on cognitive assessments and subsequent relapse to smoking. Adult smokers (N=183) were recruited from the greater Houston area to participate in the smoking cessation study. In this portion of the research, participants' smoking status was assessed on their quit day (QD), one week after QD, and four weeks after QD. Primary outcome measures were self-reported relapse, true cessation determined by a biological measure, discrepancies between self-reported smoking status and biological assessments of smoking, and dropping out. Primary predictor measures were the Balanced Inventory of Desirable Responding (BIDR) and self-reported motivation to quit smoking. The BIDR is a 40-item questionnaire that assesses Self-Deceptive Enhancement (SDE; the tendency to give self-reports that are honest but positively biased) and Impression Management (IM; deliberate self-presentation to an audience). Scores were used to create a dichotomous BIDR total score group variable, a dichotomous SDE group variable, and a dichotomous IM group variable. Participants scoring at least one standard deviation above the mean were placed in the "high" group, and those scoring below that cutoff were placed in the "normal" group. In addition, age, race, and gender were analyzed as covariates. The overall findings of this study suggest that in the general population the BIDR informs participants' self-reports and the IM and SDE subscales inform participants' behavior. The BIDR predicted self-reported relapse in the general population and trended toward indicating that a participant would claim smoking cessation success when biological measures indicated otherwise. SDE interacted with motivation to predict biologically verified cessation success. There was no main effect of BIDR, IM, or SDE in predicting dropout; however, IM interacted with age to predict participants' likelihood of dropping out. Used in conjunction, the BIDR and its IM and SDE subscales can help tailor smoking cessation programs more accurately to the needs of individual participants.
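The grouping rule described above (scores at least one standard deviation above the sample mean form the "high" group) might be implemented as in this sketch, with hypothetical variable names:

```python
# Hedged sketch of the +1 SD dichotomization rule (hypothetical column names).
import pandas as pd

scores = pd.read_csv("bidr_scores.csv")   # hypothetical columns: bidr_total, sde, im

for col in ["bidr_total", "sde", "im"]:
    cutoff = scores[col].mean() + scores[col].std()
    scores[f"{col}_group"] = (scores[col] >= cutoff).map({True: "high", False: "normal"})

print(scores.filter(like="_group").apply(pd.Series.value_counts))
```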
Abstract:
Objective: To evaluate the visual and refractive outcomes after phacoemulsification surgery in eyes with isolated lens coloboma. Design: Prospective, consecutive case series. Participants: Eighteen eyes of 13 patients with isolated lens coloboma were included in the study. Mean patient age was 13.9 ± 6.5 years. Methods: Patients underwent phacoemulsification surgery with combined implantation of a capsular tension ring (CTR) and an intraocular lens. In colobomas of less than 120°, a CTR was used, whereas in colobomas of more than 120°, a Cionni-modified single-eyelet CTR was used to achieve better capsular centration. The main outcome measures were uncorrected distance visual acuity, corrected distance visual acuity, refraction, and keratometry. Results: Mean logMAR uncorrected distance visual acuity and corrected distance visual acuity improved significantly from 1.53 ± 0.35 and 1.02 ± 0.47 before surgery to 0.67 ± 0.51 and 0.52 ± 0.49 at the last follow-up visit (p < 0.001). Mean refractive cylinder and spherical equivalent decreased significantly from –6.73 ± 1.73 and –6.72 ± 4.07 D preoperatively to –1.40 ± 1.39 and –0.83 ± 1.31 D at the end of follow-up (p = 0.001 and p = 0.01, respectively). Mean keratometric astigmatism at the preoperative and postoperative visits was 1.58 ± 0.97 and 1.65 ± 0.94 D, respectively (p = 0.70). Conclusions: Phacoemulsification with CTR and intraocular lens implantation is an effective and safe option for providing refractive correction and significant visual improvement in eyes with isolated lens coloboma.
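Two conventions behind the outcomes above, shown with illustrative values rather than patient data: the spherical equivalent is the sphere plus half the cylinder, and a logMAR acuity converts to decimal acuity as 10^(-logMAR).

```python
# Hedged sketch of the refraction and acuity conventions (illustrative values only).
sphere, cylinder = -3.5, -6.5           # dioptres (illustrative refraction)
spherical_equivalent = sphere + cylinder / 2.0
print(f"spherical equivalent = {spherical_equivalent:+.2f} D")

logmar = 0.52                           # mean postoperative CDVA reported above
decimal_acuity = 10 ** (-logmar)
print(f"logMAR {logmar} ~ decimal acuity {decimal_acuity:.2f}")
```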
Abstract:
Introduction: Adolescents with physical disabilities transitioning to adult life have difficulty achieving optimal social participation. This study explores the effects of a social circus program on the social participation of these young people, from their point of view and that of their parents. Method: Exploratory qualitative study with a phenomenological orientation. Nine people with physical disabilities, aged 18 to 25 years, took part in the program for nine months. Data collected: perceptions of the quality of their social participation, gathered through semi-structured interviews conducted with the participants and one of their parents before, at the midpoint of, and after the intervention. The validated interview guide is anchored in the Human Development Model - Disability Creation Process 2 (HDM-PPH2). Audio recordings of the interviews were transcribed verbatim. Content was analyzed with Nvivo 9 software using a previously validated coding grid (co-coding, reverse coding). Results: Corpus of 54 interviews. Mean age was 20.0 ± 1.4 years for the young adults and 51 ± 3.6 years for the parents. According to all respondents, the young adults' social participation was enhanced, particularly with respect to communication, mobility, interpersonal relationships, responsibilities and community life. Self-perception and social skills, which also improved, fostered greater self-efficacy. Conclusion: This study therefore supports the potential of social circus as an innovative and promising approach in physical rehabilitation for this population, and supports the relevance of further rigorous studies measuring the various possible and identified outcomes.
Abstract:
Objective: To identify factors influencing the prescribing of medicines by general practitioners in rural and remote Australia. Design: A qualitative study using a questionnaire to determine attitudes about prescribing, specific prescribing habits and comments on prescribing in 'rural practice'. Setting: General practice in rural and remote Queensland. Subjects: General practitioners practising in rural and remote settings in Queensland (n = 258). Main outcome measures: The factors perceived to influence the prescribing of medicines by medical practitioners in rural environments. Results: A 58% response rate (n = 142) was achieved. Most respondents agreed that they prescribe differently in rural practice compared with city practice. The majority of respondents agreed that their prescribing was influenced by practice location, isolation of patients' home locations, limited diagnostic testing and increased drug monitoring. Location issues and other issues were more likely to be identified as 'influential' by the more isolated practitioners. Factors such as access to continuing medical education and to specialists were confirmed as having an influence on prescribing. The prescribing of recently marketed drugs was more likely among doctors practising in less remote rural areas. Conclusion: Practising in rural and remote locations is perceived to have an effect on prescribing. These influences need to be considered when developing quality use of medicines policies and initiatives for these locations. What is already known: Anecdotal and audit-based studies have shown that rural general practice differs from urban-based practice in Australia, including some limited data showing variations in prescribing patterns. No substantiated explanations for these variations have been offered. It is known that interventions to change prescribing behaviour are more likely to be effective if they are perceived as relevant; hence, our knowledge of rural doctors' perceptions of differences in rural practice prescribing needs to be improved. What this study adds: Rural doctors believed that they prescribe differently in rural compared with city practice, and they described a range of influences. The more remotely located doctors were more likely to report 'rural' influences on prescribing; however, most results failed to reach statistical significance when compared with the less remotely located doctors. These perceptions should be considered when developing medicines policy and education for rural medical practitioners to ensure they are perceived as rurally relevant.