742 results for Patient-reported outcome measures
Abstract:
Background: Achieving health equity has been identified as a major challenge, both internationally and within Australia. Inequalities in cancer outcomes are well documented and must be quantified before they can be addressed. One method of portraying geographical variation in data is mapping. Recently we produced thematic maps showing the geographical variation in cancer incidence and survival across Queensland, Australia. This article documents the decisions and rationale used in producing these maps, with the aim of assisting others in producing chronic disease atlases. Methods: Bayesian hierarchical models were used to produce the estimates. Justification is provided for the cancers chosen, the geographical areas used, the modelling method, the outcome measures mapped, the production of the adjacency matrix, the assessment of convergence, the sensitivity analyses performed, and the determination of significant geographical variation. Conclusions: Although careful consideration of many issues is required, chronic disease atlases are a useful tool for assessing and quantifying geographical inequalities. In addition, they help focus research efforts on investigating why the observed inequalities exist, which in turn informs advocacy, policy, support and education programs designed to reduce these inequalities.
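The adjacency matrix mentioned in the Methods encodes which geographical areas share a border, and is a standard input to Bayesian spatial (e.g., CAR) models. A minimal illustrative sketch, using a made-up three-area map rather than the Queensland geography:

```python
import numpy as np

def adjacency_matrix(neighbours):
    """Build a symmetric 0/1 adjacency matrix from a neighbour-list
    mapping each area index to the indices of areas sharing a border."""
    n = len(neighbours)
    W = np.zeros((n, n), dtype=int)
    for i, nbrs in neighbours.items():
        for j in nbrs:
            W[i, j] = W[j, i] = 1  # borders are mutual
    return W

# Toy map: area 0 borders areas 1 and 2; areas 1 and 2 do not touch.
W = adjacency_matrix({0: [1, 2], 1: [0], 2: [0]})
assert (W == W.T).all()      # symmetry is required by CAR-type priors
row_sums = W.sum(axis=1)     # number of neighbours per area
```

In practice the neighbour lists would be derived from shared polygon boundaries in the GIS data; the row sums feed the conditional variances of the spatial prior.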
Abstract:
Objective: To compare the location and accessibility of current Australian chronic heart failure (CHF) management programs and general practice services with the probable distribution of the population with CHF. Design and setting: Data on the prevalence and distribution of the CHF population throughout Australia, and the locations of CHF management programs and general practice services from 1 January 2004 to 31 December 2005 were analysed using geographic information systems (GIS) technology. Outcome measures: Distance of populations with CHF to CHF management programs and general practice services. Results: The highest prevalence of CHF (20.3–79.8 per 1000 population) occurred in areas with high concentrations of people over 65 years of age and in areas with higher proportions of Indigenous people. Five thousand CHF patients (8%) discharged from hospital in 2004–2005 were managed in one of the 62 identified CHF management programs. There were no CHF management programs in the Northern Territory or Tasmania. Only four CHF management programs were located outside major cities, with a total case load of 80 patients (0.7%). The mean distance from any Australian population centre to the nearest CHF management program was 332 km (median, 163 km; range, 0.15–3246 km). In rural areas, where the burden of CHF management falls upon general practitioners, the mean distance to general practice services was 37 km (median, 20 km; range, 0–656 km). Conclusion: There is an inequity in the provision of CHF management programs to rural Australians.
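The distance-to-nearest-service figures in a GIS analysis of this kind can be approximated from coordinates with the haversine great-circle formula. A minimal sketch with hypothetical coordinates (not the study's data):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    R = 6371.0  # mean Earth radius, km
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def distance_to_nearest(centre, services):
    """Distance from one population centre to its nearest service location."""
    return min(haversine_km(*centre, *svc) for svc in services)

# Hypothetical coordinates: one nearby site and one distant site.
services = [(-27.47, 153.03), (-34.93, 138.60)]
d = distance_to_nearest((-26.80, 153.10), services)  # roughly 75 km here
```

Summarising this distance over every population centre, weighted by estimated CHF prevalence, yields mean and median access figures like those reported above.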
Abstract:
Background Oxidative stress plays a role in acute and chronic inflammatory disease, and antioxidant supplementation has demonstrated beneficial effects in the treatment of these conditions. This study was designed to determine the optimal dose of an antioxidant supplement in healthy volunteers to inform a Phase 3 clinical trial. Methods The study was designed as a combined Phase 1 and 2 open-label, forced-titration dose-response study in healthy volunteers (n = 21) to determine both acute safety and efficacy. Participants received a dietary supplement in a forced titration over five weeks, commencing with a no-treatment baseline and progressing through 1, 2, 4 and 8 capsules. The primary outcome measure was ex vivo change in serum oxygen radical absorbance capacity (ORAC). The secondary outcome measures were undertaken as an exploratory investigation of immune function. Results A significant increase in antioxidant activity (serum ORAC) was observed between baseline (no capsules) and the highest dose of 8 capsules per day (p = 0.040), representing a change of 36.6%. A quadratic function of dose level was fitted to estimate a dose-response curve from which the optimal dose could be derived. The quadratic component of the curve was significant (p = 0.047), with predicted serum ORAC scores increasing from the zero dose to a maximum at a predicted dose of 4.7 capsules per day and decreasing for higher doses. Among the secondary outcome measures, a significant dose effect was observed on phagocytosis of granulocytes, and a significant increase was also observed in COX-2 expression. Conclusion This study suggests that Ambrotose AO® capsules appear to be safe and most effective at a dosage of 4 capsules/day. It is important that this study is not over-interpreted; it aimed to find an optimal dose at which to assess the dietary supplement in a more rigorous clinical trial design.
The study achieved this aim and demonstrated that the dietary supplement has the potential to increase antioxidant activity. The most significant limitation of this study was that it was an open-label Phase 1/Phase 2 trial and so subject to potential bias of the kind reduced by randomization and blinding. To confirm the benefits of this dietary supplement, these effects now need to be demonstrated in a Phase 3 randomised controlled trial (RCT).
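The quadratic dose-response fit described above can be sketched as follows. The ORAC readings here are invented, shaped to peak mid-range as the abstract describes; the fitted vertex −b/(2a) plays the role of the estimated optimal dose:

```python
import numpy as np

# Hypothetical ORAC-style readings at each dose level (0, 1, 2, 4, 8 capsules);
# values are illustrative only, not the study's data.
doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
orac  = np.array([1000.0, 1150.0, 1260.0, 1360.0, 1210.0])

# Fit ORAC = a*dose^2 + b*dose + c (np.polyfit returns highest degree first).
a, b, c = np.polyfit(doses, orac, deg=2)

# The vertex -b/(2a) estimates the dose at which the fitted response is
# maximal; this is only meaningful when the curve is concave (a < 0).
optimal_dose = -b / (2 * a)  # lands near 4.8 for these illustrative numbers
```

The significance test on the quadratic component reported in the abstract corresponds to testing whether a differs from zero.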
Abstract:
Objective: To determine whether primary care management of chronic heart failure (CHF) differed between rural and urban areas in Australia. Design: A cross-sectional survey stratified by Rural, Remote and Metropolitan Areas (RRMA) classification. The primary source of data was the Cardiac Awareness Survey and Evaluation (CASE) study. Setting: Secondary analysis of data obtained from 341 Australian general practitioners and 23 845 adults aged 60 years or more in 1998. Main outcome measures: CHF determined by criteria recommended by the World Health Organization, diagnostic practices, use of pharmacotherapy, and CHF-related hospital admissions in the 12 months before the study. Results: There was a significantly higher prevalence of CHF among general practice patients in large and small rural towns (16.1%) compared with capital city and metropolitan areas (12.4%) (P < 0.001). Echocardiography was used less often for diagnosis in rural towns compared with metropolitan areas (52.0% v 67.3%, P < 0.001). Rates of specialist referral were also significantly lower in rural towns than in metropolitan areas (59.1% v 69.6%, P < 0.001), as were prescribing rates of angiotensin-converting enzyme inhibitors (51.4% v 60.1%, P < 0.001). There was no geographical variation in prescribing rates of β-blockers (12.6% [rural] v 11.8% [metropolitan], P = 0.32). Overall, few survey participants received recommended “evidence-based practice” diagnosis and management for CHF (metropolitan, 4.6%; rural, 3.9%; and remote areas, 3.7%). Conclusions: This study found a higher prevalence of CHF, and significantly lower use of recommended diagnostic methods and pharmacological treatment among patients in rural areas.
Abstract:
Background: To compare the intraocular pressure readings obtained with the iCare rebound tonometer and the 7CR non-contact tonometer with those measured by Goldmann applanation tonometry in treated glaucoma patients. Design: A prospective, cross-sectional study conducted in a private tertiary glaucoma clinic. Participants: 109 patients (54 male, 55 female), including only eyes under medical treatment for glaucoma. Methods: Measurement by Goldmann applanation tonometry, iCare rebound tonometry and 7CR non-contact tonometry. Main Outcome Measures: Intraocular pressure. Results: There were strong correlations between the intraocular pressure measurements obtained with Goldmann and both the rebound and non-contact tonometers (Spearman r ≥ 0.79, p < 0.001). However, there were small, statistically significant differences between the average readings for each tonometer. For the rebound tonometer, the mean intraocular pressure was slightly higher than with the Goldmann applanation tonometer in right eyes (p = 0.02) and similar in left eyes (p = 0.93). The Goldmann-correlated measurements from the non-contact tonometer were lower than the average Goldmann reading for both right (p < 0.001) and left (p < 0.01) eyes. The corneal-compensated measurements from the non-contact tonometer were significantly higher than those of the other tonometers (p ≤ 0.001). Conclusions: The iCare rebound tonometer and the 7CR non-contact tonometer measure IOP in fundamentally different ways from the Goldmann applanation tonometer. The resulting IOP values vary between the instruments, which needs to be considered when comparing clinical with home-acquired measurements.
Abstract:
Objective: To assess the cost-effectiveness of screening, isolation and decolonisation strategies in the control of methicillin-resistant Staphylococcus aureus (MRSA) in intensive care units (ICUs). Design: Economic evaluation. Setting: England and Wales. Population: ICU patients. Main outcome measures: Infections, deaths, costs, quality adjusted life years (QALYs), incremental cost-effectiveness ratios for alternative strategies, net monetary benefits (NMBs). Results: All strategies using isolation but not decolonisation improved health outcomes but increased costs. When MRSA prevalence on admission to the ICU was 5% and the willingness to pay per QALY gained was between £20,000 and £30,000, the best such strategy was to isolate only those patients at high risk of carrying MRSA (either pre-emptively or following identification by admission and weekly MRSA screening using chromogenic agar). Universal admission and weekly screening using polymerase chain reaction (PCR)-based MRSA detection coupled with isolation was unlikely to be cost-effective unless prevalence was high (10% colonised with MRSA on admission to the ICU). All decolonisation strategies improved health outcomes and reduced costs. While universal decolonisation (regardless of MRSA status) was the most cost-effective in the short-term, strategies using screening to target MRSA carriers may be preferred due to reduced risk of selecting for resistance. Amongst such targeted strategies, universal admission and weekly PCR screening coupled with decolonisation with nasal mupirocin was the most cost-effective. This finding was robust to ICU size, MRSA admission prevalence, the proportion of patients classified as high-risk, and the precise value of willingness to pay for health benefits. 
Conclusions: MRSA control strategies that use decolonisation are likely to be cost-saving in an ICU setting provided resistance is lacking, and combining universal PCR-based screening with decolonisation is likely to represent good value for money if untargeted decolonisation is considered unacceptable. In ICUs where decolonisation is not implemented there is insufficient evidence to support universal MRSA screening outside high prevalence settings.
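The evaluation's decision metrics, incremental cost-effectiveness ratios and net monetary benefits, follow their standard definitions. A minimal sketch with made-up costs and QALYs at the £20,000-per-QALY threshold:

```python
def incremental_cost_effectiveness(cost_new, qaly_new, cost_old, qaly_old):
    """ICER: extra cost per extra QALY of the new strategy vs. the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def net_monetary_benefit(cost, qalys, wtp):
    """NMB: QALYs valued at the willingness-to-pay threshold, minus cost."""
    return qalys * wtp - cost

# Illustrative (made-up) figures, not the study's results.
icer = incremental_cost_effectiveness(cost_new=510_000, qaly_new=101.0,
                                      cost_old=500_000, qaly_old=100.0)
nmb = net_monetary_benefit(cost=510_000, qalys=101.0, wtp=20_000)
# icer == 10_000.0  (below the £20,000 threshold, so cost-effective)
# nmb  == 1_510_000.0
```

A strategy with an ICER below the willingness-to-pay threshold, or equivalently the highest NMB among the options compared, is preferred under these metrics.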
Abstract:
Background and aim Falls are the leading cause of injury in older adults. Identifying people at risk before they experience a serious fall requiring hospitalisation allows an opportunity to intervene earlier and potentially reduce further falls and subsequent healthcare costs. The purpose of this project was to develop a referral pathway to a community falls-prevention team for older people who had experienced a fall attended by a paramedic service and who were not transported to hospital. It was also hypothesised that providing intervention to this group of clients would reduce future falls-related ambulance call-outs, emergency department presentations and hospital admissions. Methods An education package, referral pathway and follow-up procedures were developed. Both services had regular meetings, and work shadowing with the paramedics was also trialled to encourage more referrals. A range of demographic and other outcome measures were collected to compare people referred through the paramedic pathway and through traditional pathways. Results Internal data from the Queensland Ambulance Service indicated that there were approximately six falls per week by community-dwelling older persons in the eligible service catchment area (south west Brisbane metropolitan area) who were attended to by Queensland Ambulance Service paramedics, but not transported to hospital during the 2-year study period (2008–2009). Of the potential 638 eligible patients, only 17 (2.6%) were referred for a falls assessment. Conclusion Although this pilot programme had support from all levels of management as well as from the service providers, it did not translate into actual referrals. Several explanations are provided for these preliminary findings.
Abstract:
Methodological differences among studies of vasomotor symptoms limit rigorous comparison or systematic review. Vasomotor symptoms generally include hot flushes and night sweats, although other associated symptoms exist. Prevalence rates vary between and within populations, but different studies collect data on frequency, bothersomeness, and/or severity using different outcome measures and scales, making comparisons difficult. We reviewed only cross-cultural studies of menopausal symptoms that explicitly examined symptoms in general populations of women in different countries or different ethnic groups in the same country. This resulted in the inclusion of nine studies: Australian/Japanese Midlife Women's Health Study (AJMWHS), Decisions At Menopause Study (DAMeS), Four Major Ethnic Groups (FMEG), Hilo Women's Health Survey (HWHS), Mid-Aged Health in Women from the Indian Subcontinent (MAHWIS), Penn Ovarian Aging Study (POAS), Study of Women's Health Across the Nation (SWAN), Women's Health in Midlife National Study (WHiMNS), and Women's International Study of Health and Sexuality (WISHeS). These studies highlight the methodological challenges involved in conducting multi-population studies, particularly when languages differ, but also highlight the importance of performing multivariate and factor analyses. Significant cultural differences in one or more vasomotor symptoms were observed in 8 of 9 studies, and symptoms were influenced by the following determinants: menopausal status, hormones (and their variance), age (more precisely, age squared, age²), BMI, depression, anxiety, poor physical health, perceived stress, lifestyle factors (hormone therapy use, smoking and exposure to passive smoke), and acculturation (in immigrant populations). Recommendations are made to improve methodological rigor and facilitate comparisons in future cross-cultural menopause studies.
Abstract:
Purpose. The Useful Field of View (UFOV®) test has been shown to be highly effective in predicting crash risk among older adults. An important question that we examined in this study is whether this association reflects the ability of the UFOV to predict difficulties in attention-demanding driving situations that involve either visual or auditory distracters. Methods. Participants included 92 community-living adults (mean age 73.6 ± 5.4 years; range 65–88 years) who completed all three subtests of the UFOV, assessing visual processing speed (subtest 1), divided attention (subtest 2), and selective attention (subtest 3); driving safety risk was also classified using the UFOV scoring system. Driving performance was assessed separately on a closed-road circuit under three conditions: no distracters, visual distracters, and auditory distracters. Driving outcome measures included road sign recognition, hazard detection, gap perception, time to complete the course, and performance on the distracter tasks. Results. Those rated as safe on the UFOV (safety rating categories 1 and 2), as well as those responding faster than the recommended cut-off on the selective attention subtest (350 msec), performed significantly better in terms of overall driving performance and also experienced less interference from distracters. Of the three UFOV subtests, the selective attention subtest best predicted overall driving performance in the presence of distracters. Conclusions. Older adults who were rated as higher risk on the UFOV, particularly on the selective attention subtest, demonstrated the poorest driving performance in the presence of distracters. This finding suggests that the selective attention subtest of the UFOV may be differentially more effective in predicting driving difficulties in situations of divided attention, which are commonly associated with crashes.
Abstract:
The purpose of this study was to determine the effects of cryotherapy, in the form of cold water immersion, on knee joint position sense. Fourteen healthy volunteers, with no previous knee injury or pre-existing clinical condition, participated in this randomized cross-over trial. The intervention consisted of a 30-min immersion, to the level of the umbilicus, in either cold (14 ± 1°C) or tepid (28 ± 1°C) water. Approximately one week later, in a randomized fashion, the volunteers completed the remaining immersion. Active ipsilateral limb repositioning sense of the right knee was measured, using weight-bearing and non-weight-bearing assessments, with video-recorded 3D motion analysis. These assessments were conducted immediately before and after each immersion. No significant differences were found between treatments for the absolute (P = 0.29), relative (P = 0.21) or variable error (P = 0.86). The average effect size of the outcome measures was modest (range −0.49 to 0.9) and all the associated 95% confidence intervals for these effect sizes crossed zero. These results indicate that there is no evidence of an enhanced risk of injury, following a return to sporting activity, after cold water immersion.
Abstract:
Objective: To (1) search the English-language literature for original research addressing the effect of cryotherapy on joint position sense (JPS) and (2) make recommendations regarding how soon healthy athletes can safely return to participation after cryotherapy. Data Sources: We performed an exhaustive search for original research using the AMED, CINAHL, MEDLINE, and SportDiscus databases from 1973 to 2009 to gather information on cryotherapy and JPS. Key words used were combinations of cryotherapy, proprioception, and joint position sense. Study Selection: The inclusion criteria were (1) the literature was written in English, (2) participants were human, (3) an outcome measure included JPS, (4) participants were healthy, and (5) participants were tested immediately after a cryotherapy application to a joint. Data Extraction: The means and SDs of the JPS outcome measures were extracted and used to estimate the effect size (Cohen d) and associated 95% confidence intervals for comparisons of JPS before and after a cryotherapy treatment. The numbers, ages, and sexes of participants in all 7 selected studies were also extracted. Data Synthesis: The JPS was assessed in 3 joints: ankle (n = 2), knee (n = 3), and shoulder (n = 2). The average effect size for the 7 included studies was modest, with effect sizes ranging from −0.08 to 1.17, with a positive number representing an increase in JPS error. The average methodologic score of the included studies was 5.4/10 (range, 5–6) on the Physiotherapy Evidence Database scale. Conclusions: Limited and equivocal evidence is available to address the effect of cryotherapy on proprioception in the form of JPS. Until further evidence is provided, clinicians should be cautious when returning individuals to tasks requiring components of proprioceptive input immediately after a cryotherapy treatment.
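Cohen d effect sizes of the kind extracted above are computed from group means and SDs via the pooled standard deviation. A minimal sketch with illustrative JPS-error numbers (not drawn from the reviewed studies):

```python
from math import sqrt

def cohens_d(mean_post, mean_pre, sd_post, sd_pre, n_post, n_pre):
    """Cohen's d from group means and SDs, using the pooled standard
    deviation; a positive value here represents an increase in JPS error."""
    pooled_sd = sqrt(((n_pre - 1) * sd_pre ** 2 + (n_post - 1) * sd_post ** 2)
                     / (n_pre + n_post - 2))
    return (mean_post - mean_pre) / pooled_sd

# Illustrative numbers: mean JPS error rising from 3.0° to 3.8° (SD 1.5°, n = 14).
d = cohens_d(mean_post=3.8, mean_pre=3.0, sd_post=1.5, sd_pre=1.5,
             n_post=14, n_pre=14)  # 0.8 / 1.5 ≈ 0.53
```

The 95% confidence intervals reported alongside such effect sizes are built from d and the group sizes; when an interval crosses zero, the pre-post difference is not statistically distinguishable from no effect.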
Abstract:
Background Excessive speed contributes to the incidence and severity of road crashes. The Theory of Planned Behaviour (TPB) has successfully explained variance in speeding intentions and behaviour. However, studies have shown that more than 40% of the variance in outcome measures of speeding remains unexplained, suggesting that additional constructs may help to enhance the TPB's predictive power. Therefore, this study examined mindfulness, a promising construct which has not yet been tested as an additional TPB predictor. Aims The aims of this study were to explore drivers' beliefs about speeding in school zones using the extended TPB as a framework and to examine the effect of mindfulness on driver speeding behaviour in school zones. Methods Australian drivers (N = 17) participated in one of four focus group discussions. The overall sample comprised five males and twelve females aged between 17 and 56 years. All participants were recruited via purposive sampling among 1st-year psychology students at a large South East Queensland university. The group discussions took approximately one hour and were guided by a structured interview schedule which sought to elicit drivers' beliefs, thoughts and opinions on speeding in school zones and the factors which motivate such behaviour. Results Overall, thematic analysis revealed that some similar issues emerged across the groups. In particular, and perhaps unsurprisingly given public concern about the safety of school children, there was much agreement that speeding in school zones was dangerous and unacceptable. Somewhat paradoxically, however, some participants also agreed that they had unintentionally or mindlessly sped in school zones.
There were several factors that drivers believed influenced their speeding in school zones, including their current mood (e.g., if in a bad mood, anxious, or excited they may be more likely to drive without awareness of, and attention to, their driving environment) and the extent to which they were familiar with the environment (i.e., the more familiar the context, the more likely they were to drive mindlessly). Thus, although drivers expressed a belief that speeding in school zones was dangerous and unacceptable, the extent to which a driver is mindful does influence whether or not a driver may actually engage in speeding in this context. Discussion and conclusions This study highlights the potential role of mindfulness in helping to explain speeding behaviour in school zones. Mindless drivers may speed unintentionally and, while unintentional, still endanger the safety and lives of school children. The findings of this research suggest that unintentional speeding, especially in school zones, may be reduced by countermeasures which heighten the extent to which drivers are mindful of approaching and/or driving through a school zone, such as street markings and engineering measures (e.g., flashing lights and speed bumps).
Abstract:
Objectives: To investigate the effect of hot and cold temperatures on ambulance attendances. Design: An ecological time series study. Setting and participants: The study was conducted in Brisbane, Australia. We collected information on 783 935 daily ambulance attendances, along with data of associated meteorological variables and air pollutants, for the period of 2000–2007. Outcome measures: The total number of ambulance attendances was examined, along with those related to cardiovascular, respiratory and other non-traumatic conditions. Generalised additive models were used to assess the relationship between daily mean temperature and the number of ambulance attendances. Results: There were statistically significant relationships between mean temperature and ambulance attendances for all categories. Acute heat effects were found with a 1.17% (95% CI: 0.86%, 1.48%) increase in total attendances for 1 °C increase above threshold (0–1 days lag). Cold effects were delayed and longer lasting with a 1.30% (0.87%, 1.73%) increase in total attendances for a 1 °C decrease below the threshold (2–15 days lag). Harvesting was observed following initial acute periods of heat effects, but not for cold effects. Conclusions: This study shows that both hot and cold temperatures led to increases in ambulance attendances for different medical conditions. Our findings support the notion that ambulance attendance records are a valid and timely source of data for use in the development of local weather/health early warning systems.
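Effect estimates like the 1.17% increase per 1 °C above threshold are typically read off the coefficient of a log-link (Poisson-family) model as (e^β − 1) × 100. A small sketch of that conversion:

```python
from math import exp, log

def percent_change_per_degree(beta):
    """Convert a log-rate coefficient (per 1 °C) from a Poisson-family
    model into the percentage change in attendances per 1 °C."""
    return (exp(beta) - 1) * 100

# A reported 1.17% heat effect corresponds to beta = ln(1.0117);
# the conversion simply inverts that relationship.
beta_heat = log(1.0117)
pct = percent_change_per_degree(beta_heat)  # recovers 1.17
```

The same transformation applied to the confidence limits of β yields the interval bounds (0.86%, 1.48%) reported in the abstract.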
Abstract:
Background and significance: Older adults with chronic diseases are at increasing risk of hospital admission and readmission. Approximately 75% of adults have at least one chronic condition, and the odds of developing a chronic condition increase with age. Chronic diseases consume about 70% of the total Australian health expenditure, and about 59% of hospital events for chronic conditions are potentially preventable. These figures have brought to light the importance of the management of chronic disease among the growing older population. Many studies have endeavoured to develop effective chronic disease management programs by applying social cognitive theory. However, limited studies have focused on chronic disease self-management in older adults at high risk of hospital readmission. Moreover, although the majority of studies have covered a wide range of valuable outcome measures, there is scant evidence examining fundamental health outcomes such as nutritional status, functional status and health-related quality of life. Aim: The aim of this research was to test social cognitive theory in relation to self-efficacy in managing chronic disease and three health outcomes, namely nutritional status, functional status, and health-related quality of life, in older adults at high risk of hospital readmission. Methods: A cross-sectional study design was employed for this research. Three studies were undertaken. Study One examined nutritional status and the validation of a nutritional screening tool; Study Two explored the relationships between participants' characteristics, self-efficacy beliefs, and health outcomes based on the study's hypothesized model; Study Three tested a theoretical model based on social cognitive theory, examining potential mechanisms of the mediation effects of social support and self-efficacy beliefs.
One hundred and fifty-seven patients aged 65 years and older with a medical admission and at least one risk factor for readmission were recruited. Data were collected from medical records on demographics and medical history, and from self-report questionnaires. The nutrition data were collected by two registered nurses. For Study One, a contingency table and the kappa statistic were used to determine the validity of the Malnutrition Screening Tool. In Study Two, standard multiple regression, hierarchical multiple regression and logistic regression were undertaken to determine the significant predictors of the three health outcome measures. For Study Three, a structural equation modelling approach was taken to test the hypothesized self-efficacy model. Results: The findings of Study One suggested that malnutrition continues to be a concern in older adults, with a prevalence of 20.6% according to the Subjective Global Assessment. Additionally, the findings confirmed that the Malnutrition Screening Tool is a valid nutritional screening tool for hospitalized older adults at risk of readmission when compared with the Subjective Global Assessment, with high sensitivity (94%) and specificity (89%) and substantial agreement between the two methods (κ = .74, p < .001; 95% CI .62–.86). Analysis of the Study Two data found that depressive symptoms and perceived social support were the two strongest influences on self-efficacy in managing chronic disease in a hierarchical multiple regression. Results of multivariable regression models suggested that advancing age, depressive symptoms and less tangible support were three important predictors of malnutrition. In terms of functional status, a standard regression model found that social support was the strongest predictor of the Instrumental Activities of Daily Living, followed by self-efficacy in managing chronic disease.
The results of standard multiple regression revealed that the number of hospital readmission risk factors adversely affected the physical component score, while depressive symptoms and self-efficacy beliefs were two significant predictors of the mental component score. In Study Three, the structural equation modelling found that self-efficacy partially mediated the effect of health characteristics and depression on health-related quality of life. The health characteristics had strong direct effects on functional status and body mass index. The results also indicated that social support partially mediated the relationship between health characteristics and functional status. With regard to the joint effects of social support and self-efficacy, social support fully mediated the effect of health characteristics on self-efficacy, and self-efficacy partially mediated the effect of social support on functional status and health-related quality of life. The results also demonstrated that the models fitted the data well, with relatively high variance explained, implying that the hypothesized constructs were highly relevant; hence the application of social cognitive theory in this context was supported. Conclusion: This thesis highlights the applicability of social cognitive theory to chronic disease self-management in older adults at risk of hospital readmission. Further studies are recommended to validate and extend the development of social cognitive theory in chronic disease self-management in older adults, to improve their nutritional and functional status and health-related quality of life.
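The screening-tool validation in Study One rests on sensitivity, specificity and Cohen's kappa computed from a 2×2 contingency table against the reference standard. A minimal sketch with illustrative counts (not the study's data):

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 contingency
    table comparing a screening tool against a reference standard."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)            # true positives among the malnourished
    specificity = tn / (tn + fp)            # true negatives among the well-nourished
    observed = (tp + tn) / n                # raw agreement
    # Chance agreement: product of marginal proportions, summed over classes.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative counts: 30 malnourished and 120 well-nourished by the reference.
sens, spec, kappa = screening_stats(tp=28, fp=13, fn=2, tn=107)
```

Kappa discounts the agreement expected by chance alone, which is why it is reported alongside sensitivity and specificity; values in the 0.61 to 0.80 band are conventionally labelled substantial agreement.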