173 results for long-term aged care
Abstract:
Recent years have seen the introduction of formalised accreditation processes in both community and residential aged care, but these only partially address quality assessment within this sector. Residential aged care in Australia does not yet have a standardised system of resident assessment related to clinical, rather than administrative, outcomes. This paper describes the development of a quality assessment tool aimed at addressing this gap. Utilising previous research and the results of nominal groups with experts in the field, the 21-item Clinical Care Indicators (CCI) Tool for residential aged care was developed and trialled nationally. The CCI Tool was found to be simple to use and an effective means of collecting data on the state of resident health and care, with potential benefits for resident care planning and continuous quality improvement within facilities and organisations. The CCI Tool was further refined through a small intervention study to assess its utility as a quality improvement instrument and to investigate its relationship with resident quality of life. The current version covers 23 clinical indicators, takes about 30 minutes to complete and is viewed favourably by nursing staff who use it. Current work focuses on psychometric analysis and benchmarking, which should enable the CCI Tool to make a positive contribution to the measurement of quality in aged care in Australia.
Abstract:
We describe the design, development and lessons learned from the first phase of a rainforest ecological sensor network at Springbrook, part of a World Heritage precinct in South East Queensland. This first phase is part of a major initiative to develop the capability to provide reliable, long-term monitoring of rainforest ecosystems. We focus in particular on our analysis of the energy and communication challenges that must be solved to enable reliable, long-term deployments in these environments.
Abstract:
The efficiency of agricultural management practices to store soil organic carbon (SOC) depends on the C input level and how far a soil is from its saturation level (i.e. its saturation deficit). The C saturation hypothesis suggests an ultimate soil C stabilization capacity defined by four soil organic matter (SOM) pools capable of C saturation: (1) non-protected, (2) physically protected, (3) chemically protected and (4) biochemically protected. We tested if C saturation deficit and the amount of added C influenced SOC storage in measurable soil fractions corresponding to the conceptual chemical, physical, biochemical, and non-protected C pools. We added two levels of C-13-labeled residue to soil samples from seven agricultural sites that were either closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation level and incubated them for 2.5 years. Residue-derived C stabilization was, in most sites, directly related to C saturation deficit, but the mechanisms of C stabilization differed between the chemically and biochemically protected pools. The physically protected C pool showed a varied effect of C saturation deficit on C-13 stabilization, due to opposite behavior of the particulate organic matter (POM) and mineral fractions. We found distinct behavior between unaggregated and aggregated mineral-associated fractions, emphasizing the mechanistic difference between the chemically and physically protected C pools. To accurately predict SOC dynamics and stabilization, C saturation of soil C pools, particularly the chemically and biochemically protected pools, should be considered.
Abstract:
Microorganisms play key roles in biogeochemical cycling by facilitating the release of nutrients from organic compounds. In doing so, microbial communities use different organic substrates that yield different amounts of energy for maintenance and growth of the community. Carbon utilization efficiency (CUE) is a measure of the efficiency with which substrate carbon is incorporated into microbial biomass versus mineralized to CO2. In the face of global change, we wanted to know how temperature affected the efficiency with which the soil microbial community utilized an added labile substrate, and to determine the effect of labile soil carbon depletion (through increasing duration of incubation) on the community's ability to respond to an added substrate. Cellobiose was added to soil samples as a model compound at several times over the course of a long-term incubation experiment to measure the amount of carbon assimilated or lost as CO2 respiration. Results indicated that in all cases, the time required for the microbial community to take up the added substrate increased as the incubation time prior to substrate addition increased. However, CUE was not affected by incubation time. Increased temperature generally decreased CUE; the microbial community was thus more efficient at 15 °C than at 25 °C. These results indicate that at warmer temperatures microbial communities may release more CO2 per unit of assimilated carbon. Current climate-carbon models use a fixed CUE to predict how much CO2 will be released as soil organic matter is decomposed. Based on our findings, this assumption may be incorrect because CUE varies with temperature.
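As an illustration of the CUE concept described in this abstract, here is a minimal sketch in Python, assuming substrate-derived C assimilated into biomass and C respired as CO2 are measured separately; the function name and numeric values are hypothetical, not data from the study.

```python
def carbon_use_efficiency(c_assimilated_ug, c_respired_ug):
    """Carbon utilization efficiency (CUE): fraction of substrate-derived C
    retained in microbial biomass rather than mineralized to CO2.

    Both arguments are substrate-derived C (e.g. ug C per g soil), as would be
    traced with a labelled compound such as cellobiose.
    """
    total_uptake = c_assimilated_ug + c_respired_ug
    if total_uptake == 0:
        raise ValueError("no substrate C was taken up")
    return c_assimilated_ug / total_uptake

# Hypothetical values only: a community that keeps more of the added C in
# biomass at the cooler temperature has the higher CUE.
print(carbon_use_efficiency(60.0, 40.0))  # 0.6  (e.g. at 15 °C)
print(carbon_use_efficiency(45.0, 55.0))  # 0.45 (e.g. at 25 °C)
```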
Abstract:
Although current assessments of agricultural management practices on soil organic C (SOC) dynamics are usually conducted without any explicit consideration of limits to soil C storage, it has been hypothesized that the SOC pool has an upper, or saturation, limit with respect to C input levels at steady state. Agricultural management practices that increase C input levels over time produce a new equilibrium soil C content. However, multiple C input level treatments that produce no increase in SOC stocks at equilibrium show that soils have become saturated with respect to C inputs. SOC storage of added C input is a function of how far a soil is from its saturation level (saturation deficit) as well as of the C input level. We tested experimentally whether C saturation deficit and varying C input levels influenced soil C stabilization of added C-13 in soils varying in SOC content and physicochemical characteristics. We incubated for 2.5 years soil samples from seven agricultural sites that were closer to (i.e., A-horizon) or further from (i.e., C-horizon) their C saturation limit. At the initiation of the incubations, samples received low or high C input levels of C-13-labeled wheat straw. We also tested the effect of Ca addition and residue quality on a subset of these soils. We hypothesized that the proportion of C stabilized would be greater in samples with larger C saturation deficits (i.e., the C- versus A-horizon samples) and that the relative stabilization efficiency (i.e., ΔSOC/ΔC input) would decrease as the C input level increased. We found that C saturation deficit influenced the stabilization of added residue at six of the seven sites and that C addition level affected the stabilization of added residue at four sites, corroborating both hypotheses. Increasing Ca availability or decreasing residue quality had no effect on the stabilization of added residue. The amount of new C stabilized was significantly related to C saturation deficit, supporting the hypothesis that C saturation influenced C stabilization at all our sites. Our results suggest that soils with low C contents and degraded lands may have the greatest potential and efficiency to store added C because they are further from their saturation level.
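A minimal sketch of the two quantities this abstract relies on, C saturation deficit and relative stabilization efficiency (ΔSOC/ΔC input); the function names and example values are hypothetical and not data from the study.

```python
def saturation_deficit(c_saturation_limit, c_current):
    """C saturation deficit: how far a soil's current SOC stock sits below its
    stabilization capacity (both arguments in the same units, e.g. g C per kg soil)."""
    return max(c_saturation_limit - c_current, 0.0)

def stabilization_efficiency(delta_soc, delta_c_input):
    """Relative stabilization efficiency (delta SOC / delta C input): the fraction
    of added residue C that ends up stabilized as SOC."""
    return delta_soc / delta_c_input

# Hypothetical illustration: a C-horizon soil far from saturation stabilizes a
# larger share of the same residue addition than a near-saturated A-horizon soil.
print(saturation_deficit(c_saturation_limit=35.0, c_current=12.0))  # 23.0
print(stabilization_efficiency(delta_soc=1.2, delta_c_input=4.0))   # 0.3  (C-horizon)
print(stabilization_efficiency(delta_soc=0.4, delta_c_input=4.0))   # 0.1  (A-horizon)
```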
Abstract:
No-tillage (NT) management has been promoted as a practice capable of offsetting greenhouse gas (GHG) emissions because of its ability to sequester carbon in soils. However, true mitigation is only possible if the overall impact of NT adoption reduces the net global warming potential (GWP) determined by fluxes of the three major biogenic GHGs (i.e. CO2, N2O, and CH4). We compiled all available data of soil-derived GHG emission comparisons between conventional tilled (CT) and NT systems for humid and dry temperate climates. Newly converted NT systems increase GWP relative to CT practices, in both humid and dry climate regimes, and longer-term adoption (>10 years) only significantly reduces GWP in humid climates. Mean cumulative GWP over a 20-year period is also reduced under continuous NT in dry areas, but with a high degree of uncertainty. Emissions of N2O drive much of the trend in net GWP, suggesting improved nitrogen management is essential to realize the full benefit from carbon storage in the soil for purposes of global warming mitigation. Our results indicate a strong time dependency in the GHG mitigation potential of NT agriculture, demonstrating that GHG mitigation by adoption of NT is much more variable and complex than previously considered, and policy plans to reduce global warming through this land management practice need further scrutiny to ensure success.
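To illustrate how net GWP is aggregated from the three gas fluxes mentioned in this abstract, here is a minimal sketch in CO2-equivalents using the IPCC AR4 100-year factors (CH4 = 25, N2O = 298); the flux values are hypothetical and not taken from the compiled data.

```python
# 100-year global warming potentials (IPCC AR4: CO2 = 1, CH4 = 25, N2O = 298);
# other horizons or newer factors can be swapped in.
GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def net_gwp(co2_kg_ha, ch4_kg_ha, n2o_kg_ha, gwp=GWP_100):
    """Net soil-derived GWP in kg CO2-equivalents per hectare.

    Positive fluxes are emissions to the atmosphere; negative fluxes (e.g. soil C
    sequestration expressed as CO2, or net CH4 uptake) are sinks.
    """
    return (co2_kg_ha * gwp["CO2"]
            + ch4_kg_ha * gwp["CH4"]
            + n2o_kg_ha * gwp["N2O"])

# Hypothetical fluxes: NT stores C (negative CO2 term), but a modest increase in
# N2O emissions offsets most of that gain because of N2O's high GWP.
print(net_gwp(co2_kg_ha=-1100.0, ch4_kg_ha=-1.0, n2o_kg_ha=3.0))  # -231.0 kg CO2-eq
```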
Abstract:
To undertake exploratory benchmarking of a set of clinical indicators of quality care in residential care in Australia, data were collected from 107 residents within four medium-sized facilities (40–80 beds) in Brisbane, Australia. The proportion of residents in each sample facility with a particular clinical problem was compared with US Minimum Data Set quality indicator thresholds. Results demonstrated variability within and between clinical indicators, suggesting breadth of assessment using various clinical indicators of quality is an important factor when monitoring quality of care. More comprehensive and objective measures of quality of care would be of great assistance in determining and monitoring the effectiveness of residential aged care provision in Australia, particularly as demands for accountability by consumers and their families increase. What is known about the topic? The key to quality improvement is effective quality assessment, and one means of evaluating quality of care is through clinical outcomes. The Minimum Data Set quality indicators have been credited with improving quality in United States nursing homes. What does this paper add? The Clinical Care Indicators Tool was used to collect data on clinical outcomes, enabling comparison of data from a small Australian sample with American quality benchmarks to illustrate the utility of providing guidelines for interpretation. What are the implications for practitioners? Collecting and comparing clinical outcome data would enable practitioners to better understand the quality of care being provided and whether practices required review. The Clinical Care Indicator Tool could provide a comprehensive and systematic means of doing this, thus filling a gap in quality monitoring within Australian residential aged care.
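A minimal sketch of the benchmarking logic described in this abstract: compute a facility's prevalence for a clinical indicator and flag it against upper and lower thresholds. The counts and cut-offs are illustrative only, not the published US Minimum Data Set values.

```python
def indicator_prevalence(n_with_problem, n_assessed):
    """Proportion of assessed residents with a given clinical problem."""
    return n_with_problem / n_assessed

def flag_against_thresholds(prevalence, lower, upper):
    """Classify a facility's result against quality-indicator thresholds
    (the cut-offs here are illustrative, not the published US MDS values)."""
    if prevalence <= lower:
        return "excellent care"
    if prevalence >= upper:
        return "questionable care - review practice"
    return "within expected range"

# Hypothetical facility: 9 of 28 assessed residents have the problem, compared
# against illustrative thresholds of 10% and 40%.
p = indicator_prevalence(9, 28)
print(round(p, 2), flag_against_thresholds(p, lower=0.10, upper=0.40))  # 0.32 within expected range
```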
Abstract:
Current trends in workforce development indicate the movement of workers within and across occupations to be the norm. In 2009, only one in three vocational education and training (VET) graduates in Australia ended up working in an occupation for which they were trained. This implies that VET enhances the employability of its graduates by equipping them with the knowledge and competencies to work in different occupations and sectors. This paper presents findings from a Government-funded study that examined the occupational mobility of selected associate professional and trades occupations within the Aged Care, Automotive and Civil Construction sectors in Queensland. The study surveyed enrolled nurses and related workers, motor mechanics and civil construction workers to analyse their patterns of occupational mobility, future work intentions, reasons for taking and leaving work, and the factors influencing them to leave or remain in their occupations. This paper also discusses the implications of findings for the training of workers in these sectors and more generally.
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined that the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank-order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents. Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in total number of CCIs between high care and low care groups (t(199) = 10.77, p < 0.001), while the differences between male and female residents were not significant (t(414) = 0.56, p = 0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality, and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. It is therefore recommended that the ResCareQA be adopted for wider use.
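For readers unfamiliar with the reliability statistics named in this abstract, here is a minimal sketch of Cohen's kappa and Cronbach's alpha in Python; the rater and item data are invented for illustration and are unrelated to the ResCareQA results.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of binary indicators."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_observed = np.mean(a == b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
    return (p_observed - p_chance) / (1 - p_chance)

def cronbachs_alpha(items):
    """Cronbach's alpha; `items` is an (n_residents, n_items) array of scores."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented data: two raters scoring ten residents on one binary indicator, and
# ten residents scored on four indicator items.
print(round(cohens_kappa([1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
                         [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]), 2))  # 0.8
items = [
    [1, 1, 1, 0], [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0], [1, 1, 1, 1],
    [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [1, 0, 1, 1], [0, 0, 0, 0],
]
print(round(cronbachs_alpha(items), 2))
```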