Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be calibrated using data acquired at these locations, and their outputs validated with data acquired at the same sites. Therefore, the outputs should be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than empiricism, which is the case for the macroscopic models currently used. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models. Different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled; modelling some unusual manoeuvres was considered unwarranted. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams. On-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and provide further insight into the nature of operations.
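As a point of reference for the headway calibration described above, the sketch below shows the standard form of Cowan's M3 (bunched exponential) headway model, in which a proportion alpha of vehicles travel freely with headways greater than the minimum headway delta (1 s in this abstract) and the decay rate is chosen so the mean headway equals the reciprocal of the flow. The parameter values in the example are illustrative only and are not those calibrated in the study.

```python
import numpy as np

def m3_decay_rate(q, alpha, delta=1.0):
    """Decay rate (lambda) of Cowan's M3 model chosen so the mean headway equals 1/q.

    q     : flow rate (veh/s)
    alpha : proportion of free (unbunched) vehicles, i.e. headways greater than delta
    delta : minimum (bunched) headway in seconds
    """
    return alpha * q / (1.0 - q * delta)

def m3_survivor(t, q, alpha, delta=1.0):
    """P(headway > t) under Cowan's M3 distribution."""
    lam = m3_decay_rate(q, alpha, delta)
    t = np.asarray(t, dtype=float)
    return np.where(t < delta, 1.0, alpha * np.exp(-lam * (t - delta)))

# Illustrative values only: a kerb-lane flow of 1200 veh/h with 70% free vehicles.
q = 1200 / 3600.0   # veh/s
print(m3_survivor([0.5, 2.0, 4.0], q, alpha=0.7, delta=1.0))
```

In a gap-acceptance analysis such as this one, the survivor function above gives the proportion of major stream gaps exceeding the critical gap, which feeds directly into merge capacity and delay estimates.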
Abstract:
BACKGROUND: Physical symptoms are common in pregnancy and are predominantly associated with normal physiological changes. These symptoms have a social and economic cost, leading to absenteeism from work and additional medical interventions. There is currently no simple method for identifying common pregnancy-related problems in the antenatal period. A validated tool for use by pregnancy care providers would be useful. AIM: The aim of the project was to develop and validate a Pregnancy Symptoms Inventory for use by healthcare professionals (HCPs). METHODS: A list of symptoms was generated via expert consultation with midwives and obstetrician-gynaecologists. Focus groups were conducted with pregnant women in their first, second or third trimester. The inventory was then tested for face validity and piloted for readability and comprehension. For test-retest reliability, it was administered to the same women 2 to 3 days apart. Finally, outpatient midwives trialled the inventory for 1 month and rated its usefulness on a 10 cm visual analogue scale (VAS). The number of referrals to other healthcare professionals was recorded during this month. RESULTS: Expert consultation and focus group discussions led to the generation of a 41-item inventory. Following face validity and readability testing, several items were modified. Individual item test-retest reliability was between 0.51 and 1, with the majority (34 items) scoring above 0.70. During the testing phase, 211 surveys were collected in the 1-month trial. Tiredness (45.5%), poor sleep (27.5%), back pain (19.5%) and nausea (12.6%) were experienced often. Among the women surveyed, 16.2% claimed to sometimes or often be incontinent. Referrals to the incontinence nurse increased more than 8-fold during the study period. The median rating by midwives of the ‘usefulness’ of the inventory was 8.4 (range 0.9 to 10). CONCLUSIONS: The Pregnancy Symptoms Inventory (PSI) was well accepted by women in the 1-month trial, may be a useful tool for pregnancy care providers, and aids clinicians in the early detection and subsequent treatment of symptoms. It shows promise for use in the research community for assessing the impact of lifestyle interventions in pregnancy.
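The abstract does not state which statistic was used for the item-level test-retest reliability figures; Cohen's kappa is one common choice for categorical inventory items, and the sketch below (with hypothetical responses) only illustrates how such per-item coefficients are typically computed from the two administrations 2 to 3 days apart.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical responses to one inventory item ("never"=0, "sometimes"=1, "often"=2)
# from the same women at the two administrations; values are illustrative only.
time1 = [2, 1, 0, 2, 1, 1, 0, 2, 1, 0]
time2 = [2, 1, 0, 1, 1, 1, 0, 2, 2, 0]

# Agreement between the two administrations for this single item.
kappa = cohen_kappa_score(time1, time2)
print(f"item test-retest kappa = {kappa:.2f}")
```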
Abstract:
Aim: To identify relationships between preventive activities, psychosocial factors and leg ulcer recurrence in patients with chronic venous leg ulcers. Background: Chronic venous leg ulcers are slow to heal and frequently recur, resulting in years of suffering and intensive use of health care resources. Methods: A prospective longitudinal study was undertaken with a sample of 80 patients with a venous leg ulcer, recruited when their ulcer healed. Data were collected from 2006–2009 from medical records on demographics, medical history and ulcer history, and from self-report questionnaires on physical activity, nutrition, preventive activities and psychosocial measures. Follow-up data were collected via questionnaires every three months for 12 months after healing. Median time to recurrence was calculated using the Kaplan-Meier method. A Cox proportional-hazards regression model was used to adjust for potential confounders and determine effects of preventive strategies and psychosocial factors on recurrence. Results: There were 35 recurrences in a sample of 80 participants. Median time to recurrence was 27 weeks. After adjustment for potential confounders, a Cox proportional-hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20–25 mmHg) or Class 3 (30–40 mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p<0.05) with a lower risk of recurrence, while male gender and a history of deep vein thrombosis (DVT) remained significant risk factors for recurrence. Conclusion: Results indicate that leg elevation, compression hosiery, high levels of self-efficacy and strong social support will help prevent recurrence.
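A rough illustration of the survival-analysis workflow named in this abstract (a Kaplan-Meier estimate of median time to recurrence followed by a Cox proportional-hazards model) is sketched below using the Python lifelines package. The data and column names are hypothetical placeholders, not the study's variables, and the small ridge penalty is only there to keep the toy-sized example numerically stable.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical follow-up data: time to recurrence (weeks), an event indicator,
# and two candidate predictors; values are illustrative only.
df = pd.DataFrame({
    "weeks":       [27, 52, 14, 40, 9, 52, 22, 48],
    "recurred":    [1,  0,  1,  1, 1,  0,  1,  0],   # 1 = ulcer recurred, 0 = censored at 12 months
    "compression": [1,  1,  0,  0, 0,  1,  0,  1],   # >= 6 days/week in Class 2-3 hosiery
    "male":        [0,  0,  1,  0, 1,  0,  1,  0],
})

# Kaplan-Meier estimate of the median time to recurrence.
km = KaplanMeierFitter().fit(df["weeks"], df["recurred"])
print("median time to recurrence (weeks):", km.median_survival_time_)

# Cox proportional-hazards model adjusting for both covariates at once.
cox = CoxPHFitter(penalizer=0.1).fit(df, duration_col="weeks", event_col="recurred")
cox.print_summary()
```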
Abstract:
As regional and continental carbon balances of terrestrial ecosystems become available, it becomes clear that soils are the largest source of uncertainty. Repeated inventories of soil organic carbon (SOC) organized in soil monitoring networks (SMNs) are being implemented in a number of countries. This paper reviews the concepts and design of SMNs in ten countries, and discusses the contribution of such networks to reducing the uncertainty of soil carbon balances. Some SMNs are designed to estimate country-specific land use or management effects on SOC stocks, while others collect soil carbon and ancillary data to provide a nationally consistent assessment of soil carbon condition across the major land-use/soil type combinations. The former use a single sampling campaign of paired sites, while for the latter both systematic (usually grid-based) and stratified repeated sampling campaigns (5–10 year intervals) are used, with densities of one site per 10–1,040 km². For paired sites, multiple samples at each site are taken in order to allow statistical analysis, while for single sites, composite samples are taken. In both cases, fixed depth increments together with samples for bulk density and stone content are recommended. Samples should be archived to allow for re-measurement using updated techniques. Information on land management and, where possible, land use history should be systematically recorded for each site. A case study of the agricultural frontier in Brazil is presented, in which land use effect factors are calculated in order to quantify the CO2 fluxes from national land use/management conversion matrices. Process-based SOC models can be run for the individual points of the SMN, provided detailed land management records are available. These studies are still rare, as most SMNs have been implemented recently or are in progress. Examples from the USA and Belgium show that uncertainties in SOC change range from 1.6–6.5 Mg C ha⁻¹ for the prediction of SOC stock changes on individual sites to 11.72 Mg C ha⁻¹, or 34% of the median SOC change, for soil/land use/climate units. For national SOC monitoring, stratified sampling appears to be the most straightforward approach for attributing SOC values to units with similar soil/land use/climate conditions (i.e. a spatially implicit upscaling approach). Keywords: Soil monitoring networks - Soil organic carbon - Modeling - Sampling design
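The recommendation above to sample fixed depth increments together with bulk density and stone content implies the usual per-increment stock calculation; the sketch below shows that arithmetic with illustrative numbers, not a formula taken from any of the reviewed SMNs.

```python
def soc_stock_mg_per_ha(oc_gkg, bulk_density_gcm3, depth_cm, stone_fraction):
    """SOC stock (Mg C/ha) for one fixed-depth increment.

    oc_gkg            : organic carbon concentration of the fine earth (g C per kg soil)
    bulk_density_gcm3 : bulk density of the fine earth (g/cm^3)
    depth_cm          : thickness of the sampled increment (cm)
    stone_fraction    : volumetric stone content (0-1), excluded from the stock
    """
    # grams of carbon per cm^2 of ground, then converted to Mg C/ha (1 g/cm^2 = 100 Mg/ha)
    g_c_per_cm2 = (oc_gkg / 1000.0) * bulk_density_gcm3 * depth_cm * (1.0 - stone_fraction)
    return g_c_per_cm2 * 100.0

# Example: 0-30 cm increment, 15 g C/kg, bulk density 1.3 g/cm^3, 5% stones -> about 55.6 Mg C/ha
print(soc_stock_mg_per_ha(15.0, 1.3, 30.0, 0.05))
```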
Abstract:
Loss of home is common to all people from a refugee background, yet we have little understanding of the diversity of meaning associated with this important concept. A phenomenological approach was used to explore experiences of home amongst Karen and Chin refugees residing in Brisbane. In-depth, semi-structured interviews were conducted with nine participants from Karen and Chin backgrounds. The participants comprised five females and four males (mean age 40 years, median length of time in Australia 1.33 years). Participants described their migration stories, including pre- and post-migration history. Analysis was conducted using interpretative phenomenological analysis. Three superordinate themes, explicating the meaning of home for participants, were identified: home as the experience of a psychological space of safety and retreat; home as the socio-emotional space of relatedness to family; and home as geographical-emotional landscape. Loss of home was experienced as a multidimensional loss associated with emotional and physical disturbances. These findings, based upon a phenomenological paradigm, enhance understanding of the experience of being a refugee and of the suffering engendered by loss of home. They open up the possibility of conceptualizing refugee responses in terms of human suffering and meaning making.
Abstract:
Objective: Laser Doppler imaging (LDI) was compared to wound outcomes in children's burns, to determine whether the technology could be used to predict these outcomes. Methods: Forty-eight patients with a total of 85 burns were included in the study. Patient median age was 4 years 10 months, and scans were taken 0–186 h post-burn using the fast, low-resolution setting on the Moor LDI2 laser Doppler imager. Wounds were managed by standard practice, without taking into account the scan results. Time until complete re-epithelialisation, and whether or not grafting and scar management were required, were recorded for each wound. If wounds were treated with Silvazine™ or Acticoat™ prior to the scan, this was also recorded. Results: The predominant colour of the scan was found to be significantly related to the re-epithelialisation, grafting and scar management outcomes and could be used to predict those outcomes. The prior use of Acticoat™ did not affect the relationship between scan and outcomes; however, the use of Silvazine™ did complicate the relationship for light blue and green scanned partial-thickness wounds. Scans taken within the 24-h window after the burn also appeared to be accurate predictors of wound outcome. Conclusion: Laser Doppler imaging using the low-resolution fast scan is accurate and effective in a paediatric population.
Abstract:
INTRODUCTION: Anorexia nervosa (AN) is a growing problem among young female Singaporeans. We studied the demographics and follow-up data of AN patients referred to dietitians for nutritional intervention. METHODS: A retrospective review of nutritional notes was done on 94 patients seen from 1992 to 2004. All patients were given nutritional intervention, which included individualised counselling for weight gain, a personalised diet plan, correction of poor dietary intake and correction of perceptions towards healthy eating. We collected data on body mass index (BMI), patient demographics and outcome. RESULTS: 96 percent of the patients were female and 86.2 percent were Chinese. The median BMI at initial consultation was 14.7 kg/m² (range 8.6–18.8 kg/m²). 76 percent were between 13 and 20 years old. 83 percent of the patients came back for follow-up appointments with the dietitians in addition to consultation with the psychiatrist. Overall, there was significant improvement in weight and BMI between the first and final consultations, from an average of 37 kg to 41 kg and from 14.7 kg/m² to 16.4 kg/m², respectively (p < 0.001). The average duration of follow-up was about eight months. Among the patients on follow-up, 68 percent showed improvement, with an average weight gain of 6 kg. Patients who improved had more outpatient follow-up sessions with the dietitians (4.2 consultations versus 1.6 consultations; p < 0.05), lower BMI at presentation (14.2 kg/m² versus 15.7 kg/m²; p < 0.01) and shorter duration of disease at presentation (one year versus three years; p < 0.05) compared with those who did not improve. Seven patients with the disease for more than two years did not show improvement with follow-up. CONCLUSION: We gained valuable understanding of the AN patients referred to our tertiary hospital for treatment, two-thirds of whom improved with adequate follow-up treatment. Patients who had suffered AN longer before seeking help appeared more resistant to improvement.
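The abstract reports a within-patient improvement in BMI between first and final consultations without naming the test used; a paired comparison such as a paired t-test (or a Wilcoxon signed-rank test as a non-parametric alternative) is the usual approach, sketched below on hypothetical values chosen only to resemble the reported group means.

```python
from scipy import stats

# Hypothetical admission and final BMI values (kg/m^2) for illustration only;
# the abstract reports group means of 14.7 and 16.4 kg/m^2.
bmi_first = [13.9, 14.2, 15.1, 14.7, 13.5, 15.9, 14.4, 15.0]
bmi_final = [15.8, 16.0, 17.2, 16.5, 15.1, 17.6, 16.2, 16.8]

# Paired t-test on the within-patient change.
t_stat, p_value = stats.ttest_rel(bmi_final, bmi_first)
mean_change = sum(f - i for f, i in zip(bmi_final, bmi_first)) / len(bmi_first)
print(f"mean change = {mean_change:.2f} kg/m^2, paired t-test p = {p_value:.4f}")

# Non-parametric alternative on the same pairs.
w_stat, p_wilcoxon = stats.wilcoxon(bmi_final, bmi_first)
print(f"Wilcoxon signed-rank p = {p_wilcoxon:.4f}")
```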
Abstract:
Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and impact on clinical outcomes. In the rapidly ageing Singapore population, all this evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well-documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in the Asian elderly and none has been validated in Singapore. Due to the multiethnicity, cultural, and language differences in Singapore older adults, the results from non-Asian validation studies may not be applicable. Therefore it is important to identify validated population and setting specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore. The aims of this study are therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] that were administered in no particular order. The total scores were not computed during the screening process so that the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, who was blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward. 
The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, “Do you often feel sad or depressed?”), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool from this study (refer to Section 5.6). Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutritional screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to experiencing decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of the very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in their nutritional status during admission; and (c) evaluate the validity of nutritional screening and assessment tools amongst hospitalised older adults in an Asian population. Results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore. 
As older adults may have developed malnutrition prior to hospital admission, or experienced clinically significant weight loss of >1% per week during hospitalisation, screening of the elderly should be initiated in the community, and continuous nutritional monitoring should extend beyond hospitalisation.
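To make the diagnostic performance figures above (e.g. AUC 0.865, sensitivity 84%, specificity 79% for the TTSH NST against the SGA) more concrete, the sketch below shows how such measures are typically computed from a screening score and a reference classification. The scores, SGA ratings and cut-off are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical screening-tool totals and the dietitian's SGA rating
# (1 = malnourished, 0 = well nourished); values are illustrative only.
score = np.array([5, 2, 7, 1, 6, 3, 8, 2, 4, 6])
sga   = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1])

# Area under the ROC curve of the tool against the SGA reference standard.
auc = roc_auc_score(sga, score)

# Sensitivity and specificity at a hypothetical cut-off of 4 or more.
flagged = (score >= 4).astype(int)
tn, fp, fn, tp = confusion_matrix(sga, flagged).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC = {auc:.3f}, sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```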
Abstract:
Aim: To review the management of heart failure in patients not enrolled in specialist multidisciplinary programs. Method: A prospective clinical audit of patients admitted to hospital with either a current or past diagnosis of heart failure and not enrolled in a specialist heart failure program or under the direct care of the cardiology unit. Results: 81 eligible patients were enrolled (1 August to 1 October 2008). The median age was 81 ± 9.4 years and 48% were male. Most patients (63%) were in New York Heart Association Class II or Class III heart failure. On discharge, 59% of patients were prescribed angiotensin-converting enzyme inhibitors and 43% were prescribed beta-blockers. During hospitalisation, 8.6% of patients with a past diagnosis of heart failure were started on an angiotensin-converting enzyme inhibitor and 4.9% on a beta-blocker. There was evidence of suboptimal dosage on admission and discharge for angiotensin-converting enzyme inhibitors (19% and 7.4%) and beta-blockers (29% and 17%). The results compared well with international reports regarding the under-treatment of heart failure. Conclusion: The demonstrated practice gap provides excellent opportunities for the involvement of pharmacists to improve the continuation of care for heart failure patients discharged from hospital, in the areas of medication management review, dose titration and monitoring.
Abstract:
Background: The aims of this study were to determine the documentation of pharmacotherapy optimization goals in the discharge letters of patients with the principal diagnosis of chronic heart failure. Methods: A retrospective practice audit of 212 patients discharged to the care of their local general practitioner from general medical units of a large tertiary hospital. Details of recommendations regarding ongoing pharmacological and non-pharmacological management were reviewed. The doses of medications on discharge were noted, along with whether they met current guidelines recommending titration of angiotensin-converting enzyme inhibitors and beta-blockers. Ongoing arrangements for specialist follow-up were also reviewed. Results: The mean age of patients whose letters were reviewed was 78.4 years (standard deviation ± 8.6); 50% were men. Patients had an overall median of six comorbidities and eight regular medications on discharge. Mean length of stay for each admission was 6 days. Discharge letters were posted a median of 4 days after discharge, with 25% not posted at 10 days. No discharge letter was sent in 9.4% (20) of the cases. Only six (2.8%) letters had any recommendation regarding future titration of angiotensin-converting enzyme inhibitors, and only 14 (6.6%) regarding beta-blockers. Recommendations for future non-pharmacological management, for example diuretic action plans, regular weight monitoring and exercise plans, were not found in the letters in this audit. Conclusion: Hospital discharge is an opportunity to communicate management plans for treatment optimization effectively, and while this opportunity is spurned, implementation gaps in the management of cardiac failure will probably remain.
Abstract:
Objective: To compare the location and accessibility of current Australian chronic heart failure (CHF) management programs and general practice services with the probable distribution of the population with CHF. Design and setting: Data on the prevalence and distribution of the CHF population throughout Australia, and the locations of CHF management programs and general practice services from 1 January 2004 to 31 December 2005 were analysed using geographic information systems (GIS) technology. Outcome measures: Distance of populations with CHF to CHF management programs and general practice services. Results: The highest prevalence of CHF (20.3–79.8 per 1000 population) occurred in areas with high concentrations of people over 65 years of age and in areas with higher proportions of Indigenous people. Five thousand CHF patients (8%) discharged from hospital in 2004–2005 were managed in one of the 62 identified CHF management programs. There were no CHF management programs in the Northern Territory or Tasmania. Only four CHF management programs were located outside major cities, with a total case load of 80 patients (0.7%). The mean distance from any Australian population centre to the nearest CHF management program was 332 km (median, 163 km; range, 0.15–3246 km). In rural areas, where the burden of CHF management falls upon general practitioners, the mean distance to general practice services was 37 km (median, 20 km; range, 0–656 km). Conclusion: There is an inequity in the provision of CHF management programs to rural Australians.
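The distance outcome described above was derived with GIS technology; as a much simpler illustration of the underlying idea, the sketch below computes the straight-line (great-circle) distance from one population centre to its nearest program. The coordinates are hypothetical, and the study's GIS analysis may well have used road-network rather than straight-line distances.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates: one population centre and two CHF management programs.
centre = (-27.47, 153.03)                        # e.g. a Brisbane population centre
programs = [(-27.50, 153.01), (-34.93, 138.60)]  # illustrative program locations

nearest_km = min(haversine_km(*centre, *prog) for prog in programs)
print(f"distance to nearest CHF management program: {nearest_km:.1f} km")
```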
Abstract:
Exercise interventions during adjuvant cancer treatment have been shown to increase functional capacity, relieve fatigue and distress and, in one recent study, assist chemotherapy completion. These studies have been limited to breast, prostate or mixed cancer groups, and it is not yet known whether a similar intervention is even feasible among women diagnosed with ovarian cancer. Women undergoing treatment for ovarian cancer commonly have extensive pelvic surgery followed by high-intensity chemotherapy. It is hypothesized that women with ovarian cancer may benefit most from a customised exercise intervention during chemotherapy treatment. This could reduce the number and severity of chemotherapy-related side-effects and optimize treatment adherence. Hence, the aim of the research was to assess the feasibility and acceptability of a walking intervention in women with ovarian cancer whilst undergoing chemotherapy, as well as pre-post intervention changes in a range of physical and psychological outcomes. Newly diagnosed women with ovarian cancer were recruited from the Royal Brisbane and Women’s Hospital (RBWH) to participate in a walking program throughout chemotherapy. The study used a one-group pre-post intervention test design. Baseline assessments (conducted following surgery but prior to the first or second chemotherapy cycle) and follow-up assessments (conducted three weeks after the last chemotherapy dose was received) were performed. To accommodate changes in side-effects associated with treatment, specific weekly walking targets with respect to frequency, intensity and duration were individualised for each participant. To assess feasibility, adherence and compliance with prescribed walking sessions, withdrawals and adverse events were recorded. Physical and psychological outcomes assessed included functional capacity, body composition, anxiety and depression, symptoms experienced during treatment, and quality of life. Chemotherapy completion data were also documented, and self-reported program helpfulness was assessed using a post-intervention questionnaire. Forty-two women were invited to participate. Nine women were recruited, all of whom completed the program. There were no adverse events associated with participating in the intervention, and all women reported that the walking program was helpful during their neo-adjuvant or adjuvant chemotherapy treatment. Adherence and compliance with the walking prescription were high. On average, women achieved at least two of their three individual weekly prescription targets 83% of the time (range 42% to 94%). Positive changes were found in functional capacity and quality of life, in addition to reductions in the number and intensity of treatment-associated symptoms over the course of the intervention period. Functional capacity increased for all nine women from baseline to follow-up assessment, with improvements ranging from 10% to 51%. Quality of life improvements were also noted, especially on the physical well-being scale (baseline: median 18; follow-up: median 23). Treatment symptoms, specifically constipation, pain and fatigue, reduced in presence and severity post-intervention. These positive yet preliminary results suggest that a walking intervention for women receiving chemotherapy for ovarian cancer is safe, feasible and acceptable. 
Importantly, women perceived the program to be helpful and rewarding, despite it being conducted during a time typically associated with elevated distress and treatment symptoms that are often severe enough to alter or cease the chemotherapy prescription.
Abstract:
Background: Chronic diseases, including type 2 diabetes, are a leading cause of morbidity and mortality in midlife and older Australian women. There are a number of modifiable risk factors for type 2 diabetes and other chronic diseases, including smoking, nutrition, physical activity, and overweight and obesity. Little research has been conducted in the Australian context to explore the perceived barriers to health promotion activities in midlife and older Australian women with a chronic disease. Aims: The primary aim of this study was to explore women’s perceived barriers to health promotion activities to reduce modifiable risk factors, and the relationship of perceived barriers to smoking behaviour, fruit and vegetable intake, physical activity and body mass index. A secondary aim of this study was to investigate nurses’ perceptions of the barriers to action for women with a chronic disease, and to compare those perceptions with those of the women. Methods: The study was divided into two phases. Phase 1 was a cross-sectional survey of women aged over 45 years with type 2 diabetes who were attending diabetes clinics in the Primary and Community Health Service of the Metro North Health Service District of Queensland Health (N = 22). The women were a subsample of women participating in a multi-model lifestyle intervention, the ‘Reducing Chronic Disease among Adult Australian Women’ project. Phase 2 of the study was a cross-sectional online survey of nurses working in the Primary and Community Health Service in the Metro North Health Service District of Queensland Health (N = 46). Pender’s health promotion model was used as the theoretical framework for this study. Results: Women in this study had an average total barriers score of 32.18 (SD = 9.52), which was similar to average scores reported in the literature for women with a range of physical disabilities and illnesses. The five leading barriers for this group of women were: concern about safety; too tired; not interested; and lack of information about what to do; with lack of time and feeling I can’t do things correctly ranked equal fifth. In this study there was no statistically significant difference in average total barriers scores between women in the intervention group and those in the usual care group of the parent study. There was also no significant relationship between the women’s socio-demographic variables and lifestyle risk factors and their level of perceived barriers. Nurses in the study had an average total barriers score of 44.48 (SD = 6.24), which was higher than all other average scores reported in the literature. The five leading barriers that nurses perceived to be an issue for women with a chronic disease were: lack of time and interferes with other responsibilities (the equal leading barriers); embarrassment about appearance; lack of money; too tired; and lack of support from family and friends. There was no significant relationship between the nurses’ socio-demographic and nursing variables and their level of perceived barriers. When comparing the results of women and nurses in the study, there was a statistically significant difference in the median total barriers score between the groups (p < 0.001), with the nurses perceiving the barriers to be higher (Md = 43) than the women (Md = 33). There was also a significant difference in the responses to the individual barriers items for fifteen of the eighteen items (p < 0.002). 
Conclusion: Although this study is limited by a small sample size, it contributes to understanding the perceptions of midlife and older women with a chronic disease, and of nurses, about the barriers to healthy lifestyle activities that women face. The study provides some evidence that the perceptions of women and nurses may differ, and argues that these differences may have significant implications for clinical practice. The study recommends a greater emphasis on assessing and managing perceived barriers to health promotion activities in health education and policy development, and proposes a conceptual model for understanding perceived barriers to action.
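The abstract compares median total barriers scores between independent groups of women and nurses without naming the test; a Mann-Whitney U test is one common choice for that comparison, and the sketch below only illustrates it on hypothetical scores chosen to resemble the reported medians.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical total barriers scores; the abstract reports medians of 33 (women)
# and 43 (nurses). Values are illustrative only.
women_scores  = np.array([28, 31, 33, 35, 30, 36, 27, 34, 32, 38])
nurses_scores = np.array([41, 44, 43, 46, 40, 45, 42, 47, 43, 44])

# Two-sided comparison of the two independent groups.
u_stat, p_value = mannwhitneyu(women_scores, nurses_scores, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"medians: women = {np.median(women_scores)}, nurses = {np.median(nurses_scores)}")
```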
Abstract:
Prior to the global financial crisis (GFC), Brisbane and Perth were experiencing the highest increases in median residential house prices compared with the other major Australian cities, due to strong demand for both owner-occupied and investment residential property. In both these cities, a major driver of this demand, and of the subsequent increases in residential property prices, was the strong resources sector. With the onset of the GFC in 2008, the resources and construction sectors in Queensland contracted significantly, and this had both direct and indirect impacts on the Brisbane residential property market. However, this impact was not consistent across Brisbane residential property sectors. The effect on houses and units differed, as did the impact based on geographic location and suburb value. This paper tracks Brisbane residential property sales listings, sales and returns over the period February 2009 to July 2010 and provides an analysis of the residential market for 24 Brisbane suburbs. These suburbs cover the main residential areas of Brisbane and comprise an equal number of low, medium and high socio-economic areas of Brisbane. This assessment of socio-economic status for the suburbs is based on both median household income and median house price. The analysis covers both free-standing residential property and residential units/townhouses/villas. The results show how each of these residential property sub-markets has performed following the GFC.
Abstract:
Sustainability concerns every citizen. Housing affordability and sustainable solutions are being highlighted in research and practice in many parts of the world. This paper discusses the development of a Commuter Energy and Building Utilities System (CEBUS) in sustainable housing projects as a means of bridging the gap between current median house pricing and target affordable house pricing for low-income earners. Similar scales of sustainable housing development cannot be achieved through independent application of current best-practice methods in ecologically sustainable development strategies or transit-oriented development master plans. This paper presents the initial stage of research on the first capital, ongoing utilities, and transport cost savings available from these sustainable design methods. It also outlines further research and development of a CEBUS Dynamic Simulation Model and Conceptual Framework for the Australian property development and construction industry.