938 results for PUBLISHING
Abstract:
An assessment of the potential of Family Day Care as a nutrition promotion setting in South Australia (Original Research). Nutrition & Dietetics: The Journal of the Dietitians Association of Australia, 1 March 2003. Authors: Daniels, Lynne A.; Franco, Bunny; McWhinnie, Julie-Anne. Objective: To assess the potential role of Family Day Care in nutrition promotion for preschool children. Design and setting: A questionnaire examining nutrition-related issues and practices was mailed to care providers registered in the southern region of Adelaide, South Australia. Care providers also supplied a descriptive, qualitative recall of the food provided by parents or themselves to each child under five years of age in their care on the day closest to completion of the questionnaire. Subjects: 255 care providers. The response rate was 63% and covered 643 preschool children, a mean of 4.6 (SD 2.8) children per carer. Results: There was clear agreement that nutrition promotion was a relevant issue for Family Day Care providers. Nutrition and food hygiene knowledge was good, but only 54% of respondents felt confident to address food quality issues with parents. Sixty-five percent of respondents reported non-neutral approaches to food refusal and dawdling (reward, punishment, cajoling) that overrode the child's control of the amount eaten. The food recalls indicated that most children (>75%) were offered fruit at least once. Depending on hours in care (0 to 4, 5 to 8, more than 8 hours), 20%, 32% and 55% of children, respectively, were offered milk, and 65%, 82% and 87%, respectively, were offered high-fat and high-sugar foods. Conclusions: Questionnaire responses suggest that many care providers are committed to and proactive in a range of nutrition promotion activities. There is scope for strengthening skills in the management of common problems, such as food refusal and dawdling, consistent with current evidence for approaches to early feeding management that promote the development of healthy food preferences and eating patterns. Legitimising and empowering care providers in their nutrition promotion role requires clear policies, guidelines, adequate pre- and in-service training, suitable parent materials, and monitoring.
Abstract:
Objective: To evaluate the fruit and vegetable intakes of Australian adults aged 19-64 years. Methods: Intake data were collected as part of the National Nutrition Survey 1995, representing all Australian States and Territories, including city, metropolitan, rural and remote areas. The dietary intake of 8,891 19- to 64-year-olds was assessed using a structured 24-hour recall. Intake frequency was assessed as the proportion of participants consuming fruit and vegetables on the day prior to interview, and variety was assessed as the number of subgroups of fruit and vegetables consumed. Intake levels were compared with the recommendations of the Australian Guide to Healthy Eating (AGHE). Results: Sixty-two per cent of participants consumed some fruit and 89% consumed some vegetables on the day surveyed. Males were less likely to consume fruit, and younger adults less likely to consume fruit and vegetables, compared with females and older adults respectively. Variety was primarily low (1 subcategory) for fruit and medium (3-4 subcategories) for vegetables. Thirty-two per cent of adults consumed the minimum two serves of fruit and 30% consumed the minimum five serves of vegetables recommended by the AGHE. Eleven per cent of adults met the minimum recommendations for both fruit and vegetables. Conclusion: A large proportion of adults have fruit and vegetable intakes below the AGHE minimum recommendations. Implications: A nationally integrated, long-term campaign to increase fruit and vegetable consumption, supported by policy changes to address structural barriers to consumption, is vital to improve fruit and vegetable intake among adults.
Abstract:
Aim: This paper is a report of a study of variations in the pattern of nurse practitioner work in a range of service fields and geographical locations, across direct patient care, indirect patient care and service-related activities. Background: The nurse practitioner role has been implemented internationally as a service reform model to improve the access and timeliness of health care. There is a substantial body of research into the nurse practitioner role and service outcomes, but scant information on the pattern of nurse practitioner work and how it is influenced by different service models. Methods: We used work sampling methods. Data were collected between July 2008 and January 2009. Observations were recorded from a random sample of 30 nurse practitioners at 10-minute intervals, in 2-hour blocks randomly generated to cover two weeks of work time from a sampling frame of six weeks. Results: A total of 12,189 individual observations were conducted with nurse practitioners across Australia. Thirty individual activities were identified as describing nurse practitioner work, and these were distributed across three categories: direct care accounted for 36.1% of nurse practitioner time, indirect care for 32.2% and service-related activities for 31.9%. Conclusion: These findings provide useful baseline data for evaluating nurse practitioner positions and their service effect. However, the study also raises questions about the best use of nurse practitioner time and the influence of barriers to and facilitators of this model of service innovation.
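A work-sampling design of this kind reduces to drawing random observation blocks from a sampling frame. The sketch below illustrates such a schedule generator under stated assumptions: the frame start date and a 9:00-17:00 observable working day are invented for illustration, not taken from the study's protocol.

```python
import random
from datetime import datetime, timedelta

# Sketch of a work-sampling schedule: random 2-hour observation blocks
# drawn from a 6-week frame, with activity recorded at 10-minute
# intervals. Start date and working hours are illustrative assumptions.

FRAME_START = datetime(2008, 7, 7)  # assumed first day of the frame
FRAME_DAYS = 42                     # six-week sampling frame
DAY_START, DAY_END = 9, 17          # assumed observable working hours
BLOCK = timedelta(hours=2)
INTERVAL = timedelta(minutes=10)

def random_block(rng: random.Random) -> datetime:
    """Pick a random 2-hour block that fits inside one working day."""
    day = rng.randrange(FRAME_DAYS)
    start_hour = rng.randrange(DAY_START, DAY_END - 1)  # latest start 15:00
    return FRAME_START + timedelta(days=day, hours=start_hour)

def observation_times(block_start: datetime):
    """Yield the 12 observation instants in one 2-hour block."""
    t, end = block_start, block_start + BLOCK
    while t < end:
        yield t
        t += INTERVAL

rng = random.Random(42)
for start in sorted(random_block(rng) for _ in range(5)):
    times = [t.strftime("%H:%M") for t in observation_times(start)]
    print(start.date(), times)
```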
Abstract:
Aim: This paper is a report of a study conducted to validate an instrument for measuring advanced practice nursing role delineation in an international contemporary health service context using the Delphi technique. Background: Although most countries now have clear definitions and competency standards for nurse practitioners, no such clarity exists for many advanced practice nurse roles, leaving healthcare providers uncertain whether their service needs can or should be met by an advanced practice nurse or a nurse practitioner. The validation of a tool depicting advanced practice nursing is essential for the appropriate deployment of advanced practice nurses. This paper reports the second phase of a three-phase study to develop an operational framework for assigning advanced practice nursing roles. Method: An expert panel was established to review the activities in the Strong Model of Advanced Practice Role Delineation tool. Using the Delphi technique, data were collected via an online survey through a series of iterative rounds in 2008. Feedback and statistical summaries of responses were distributed to the panel until the 75% consensus cut-off was reached. Results: After three rounds and the modification of five activities, consensus was reached on the content of the tool. Conclusion: The Strong Model of Advanced Practice Role Delineation tool is valid for depicting the dimensions of practice of the advanced practice role in an international contemporary health service context, and thus has the potential to optimize the utilization of the advanced practice nursing workforce.
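The consensus rule described here is straightforward to operationalize. Below is a minimal sketch, assuming a 5-point rating scale on which a rating of 4 or above counts as agreement (the abstract does not specify the panel's actual scale), of how items are retained or carried into the next Delphi round; the item names are illustrative only.

```python
# Sketch of the 75% consensus cut-off used across Delphi rounds: an
# activity is retained once at least 75% of panellists agree it belongs
# in the tool. The 5-point scale and the >= 4 agreement threshold are
# assumptions, as are the example items and ratings.

CONSENSUS_CUTOFF = 0.75
AGREE_AT_OR_ABOVE = 4  # assumed: 4 or 5 on a 5-point scale = agreement

def has_consensus(ratings):
    """True if the share of agreeing panellists meets the cut-off."""
    agree = sum(1 for r in ratings if r >= AGREE_AT_OR_ABOVE)
    return agree / len(ratings) >= CONSENSUS_CUTOFF

# One iterative round: items without consensus are fed back to the panel
# with statistical summaries, as described in the abstract.
round_ratings = {
    "activity A": [5, 4, 4, 5, 4, 3, 5, 4],
    "activity B": [3, 4, 2, 5, 3, 3, 4, 2],
}
unresolved = [item for item, r in round_ratings.items()
              if not has_consensus(r)]
print("carry to next round:", unresolved)  # -> ['activity B']
```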
Abstract:
Internationally, the collection of reliable data on new and evolving health-care roles is crucial. We describe a protocol for the design and administration of a national census of an emergent health-care role, namely nurse practitioners in Australia, using databases held by regulatory authorities. A questionnaire was developed to obtain data on the role and scope of practice of Australian nurse practitioners. Our tool comprised five sections and a total of 56 questions, including 28 existing items from the National Nursing and Midwifery Labour Force Census and nine items recommended in the Nurse Practitioner Workforce Planning Minimum Data Set. Australian Nurse Registering Authorities (n = 6) distributed the survey on our behalf. This paper outlines our instrument and methods. The survey was administered to 238 authorized Australian nurse practitioners (85% response rate). Rigorous collection of standardized items will ensure health policy is informed by reliable and valid data. We will re-administer the survey two years after the first administration to measure change over time.
Abstract:
Construction sector application of Lead Indicators generally, and Positive Performance Indicators (PPIs) particularly, is largely seen by the sector as not providing generalizable indicators of safety effectiveness. Similarly, safety culture is often cited as an essential factor in improving safety performance, yet there is no known reliable way of measuring safety culture. This paper proposes that accurate measurement of safety effectiveness and safety culture is a prerequisite for assessing safe behaviours, safety knowledge, effective communication and safety performance. Currently there are no standard national or international safety effectiveness indicators (SEIs) accepted by the construction industry. The challenge is that quantitative survey instruments developed for measuring safety culture and/or safety climate are methodologically flawed and do not produce reliable and representative data on attitudes to safety. Measures that combine quantitative and qualitative components are needed to provide clear utility for safety effectiveness indicators.
Abstract:
Purpose: The aim was to document contact lens prescribing trends in Australia between 2000 and 2009. Methods: A survey of contact lens prescribing trends was conducted each year between 2000 and 2009. Australian optometrists were asked to provide information relating to 10 consecutive contact lens fittings between January and March each year. Results: Over the 10-year survey period, 1,462 practitioners returned survey forms representing a total of 13,721 contact lens fittings. The mean age (± SD) of lens wearers was 33.2 ± 13.6 years and 65 per cent were female. Between 2006 and 2009, rigid lens new fittings decreased from 18 to one per cent. Low water content lenses reduced from 11.5 to 3.2 per cent of soft lens fittings between 2000 and 2008. Between 2005 and 2009, toric lenses and multifocal lenses represented 26 and eight per cent, respectively, of all soft lenses fitted. Daily disposable, one- to two-week replacement and monthly replacement lenses accounted for 11.6, 30.0 and 46.5 per cent of all soft lens fittings over the survey period, respectively. The proportion of new soft fittings and refittings prescribed as extended wear has generally declined throughout the past decade. Multi-purpose lens care solutions dominate the market. Rigid lenses and monthly replacement soft lenses are predominantly worn on a full-time basis, whereas daily disposable soft lenses are mainly worn part-time. Conclusions: This survey indicates that technological advances, such as the development of new lens materials, manufacturing methods and lens designs, and the availability of various lens replacement options, have had a significant impact on the contact lens market during the first decade of the 21st century.
Abstract:
Purpose: The aim was to construct and advise on the use of a cost-per-wear model based on contact lens replacement frequency, to form an equitable basis for cost comparison. Methods: The annual cost of professional fees, contact lenses and solutions when wearing daily, two-weekly and monthly replacement contact lenses is determined in the context of the Australian market for spherical, toric and multifocal prescription types. This annual cost is divided by the number of times lenses are worn per year, resulting in a ‘cost-per-wear’. The model is presented graphically as the cost-per-wear versus the number of times lenses are worn each week, for daily replacement and reusable (two-weekly and monthly replacement) lenses. Results: The cost-per-wear for two-weekly and monthly replacement spherical lenses is almost identical and decreases with increasing frequency of wear. The cost-per-wear of daily replacement spherical lenses is lower than for reusable spherical lenses when worn one to four days per week, but higher when worn six or seven days per week. The point at which the cost-per-wear is virtually the same for all three spherical lens replacement frequencies (approximately AUD$3.00) is five days of lens wear per week. A similar but upwardly displaced (higher cost) pattern is observed for toric lenses, with the cross-over point occurring between three and four days of wear per week (AUD$4.80). Multifocal lenses have the highest price, with cross-over points for daily versus two-weekly replacement lenses at between four and five days of wear per week (AUD$5.00) and for daily versus monthly replacement lenses at three days per week (AUD$5.50). Conclusions: This cost-per-wear model can be used to assist practitioners and patients in making an informed decision about the cost of contact lens wear, one of many considerations that must be taken into account when deciding on the most suitable lens replacement modality.
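The arithmetic of the model is simple enough to sketch in a few lines. The dollar figures below are illustrative assumptions, not the paper's actual Australian market prices; the sketch only reproduces the shape of the comparison, where the daily-disposable curve flattens toward its per-wear lens cost while the reusable curve keeps falling, producing the cross-over described in the results.

```python
# Minimal sketch of the cost-per-wear model: total annual cost divided
# by the number of wears per year. Daily disposables carry a per-wear
# lens cost and need no solutions; reusables carry a fixed annual lens
# cost plus solution costs. All AUD inputs below are hypothetical.

WEEKS_PER_YEAR = 52

def cost_per_wear(annual_fees, annual_lens_cost, per_wear_lens_cost,
                  annual_solution_cost, wears_per_week):
    """Annual cost of lens wear divided by wears per year."""
    wears_per_year = wears_per_week * WEEKS_PER_YEAR
    annual_cost = (annual_fees + annual_lens_cost
                   + per_wear_lens_cost * wears_per_year
                   + annual_solution_cost)
    return annual_cost / wears_per_year

for wears in range(1, 8):
    daily = cost_per_wear(150, 0, 1.60, 0, wears)      # daily disposable
    monthly = cost_per_wear(150, 300, 0, 120, wears)   # monthly reusable
    print(f"{wears} wear-days/week: daily ${daily:.2f}, "
          f"monthly ${monthly:.2f}")
```

With these assumed inputs the two curves cross at about five wear-days per week, mirroring the spherical-lens result reported above.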
Abstract:
BACKGROUND: The standard treatment for a non-union of a hallux metatarsophalangeal joint fusion has been to revise the fusion. Revision fusion is technically more demanding, often involving bone grafting, more substantial fixation and a prolonged period of immobilization postoperatively. We present data to suggest that removal of hardware and debridement alone is an alternative treatment option. MATERIALS AND METHODS: A case note review identified patients with a symptomatic non-union after hallux metatarsophalangeal joint (MTPJ) fusion. It is our practice to offer these patients revision fusion or removal of hardware and debridement. For the seven patients who chose hardware removal and were left with a pseudarthrosis, a matched control group was selected from patients who had had successful fusions. Three outcome scores were used, and hallux valgus and dorsiflexion angles were recorded. RESULTS: One hundred and thirty-nine hallux MTPJ arthrodeses were carried out and 14 non-unions were identified. The rate of non-union in males and following previous hallux MTPJ surgery was 19% and 24%, respectively; in females undergoing a primary MTPJ fusion, the rate was 2.4%. Twelve non-union patients were reviewed at a mean of 27 months. Eleven patients had elected to undergo removal of hardware and debridement. Four patients with pseudarthrosis were unhappy with the results and proceeded to either revision fusion or MTPJ replacement. The seven non-union patients who had removal of hardware alone had outcome scores marginally worse than those with successful fusions. CONCLUSION: Removal of hardware alone is a reasonable option to offer as a relatively minor procedure following a failed arthrodesis of the first MTPJ, on the proviso that in this study four of 11 (36%) patients proceeded to revision first MTPJ fusion or first MTPJ replacement. We also found that the rate of non-union in primary first MTPJ fusion was significantly higher in males and in patients who had undergone previous surgery.
Abstract:
Purpose, Design/methodology/approach: The acknowledgement of state significance in relation to development projects can result in special treatment by regulatory authorities, particularly in terms of environmental compliance and certain economic and other government support measures. However, what constitutes a “significant project”, or a project of “state significance”, varies considerably between Australian states, and in terms of establishing threshold levels there is even less clarity in Queensland. Despite this lack of definition, the implications of “state significance” can nevertheless be considerable. For example, in Queensland, if the Coordinator-General declares a project to be a “significant project” under the State Development and Public Works Organisation Act 1971, the environmental impact assessment process may become more streamlined, potentially circumventing certain provisions under the Integrated Planning Act 1997. If the project is not large enough to be so deemed, an extractive resource under the State Planning Policy 2/07 - Protection of Extractive Resources 2007 may be considered to be of State or regional significance and subsequently designated as a “Key Resource Area”. As a consequence, such a project is afforded some measure of resource protection but remains subject to the normal assessment process under the Integrated Development Assessment System, as well as the usual requirements of the vegetation management codes and other regulations. Findings (Originality/value) and Research limitations/implications: This paper explores the various meanings of “state significance” in Queensland and the ramifications for development projects in that state. It argues for a streamlining of the assessment process in order to avoid or minimise constraints acting on the state's development. In so doing, it questions the existence of a strategic threat to the delivery of an already over-stretched infrastructure program.
Abstract:
Purpose – The paper aims to explore the key competitiveness indicators (KCIs) that provide guidelines for helping new real estate developers (REDs) achieve competitiveness during the inception stage in which they start their business. Design/methodology/approach – The research was conducted using a combination of methods. A literature review was undertaken to provide a theoretical understanding of organisational competitiveness within REDs' activities and to develop a framework of competitiveness indicators (CIs) for REDs. The Delphi forecasting method was then employed to investigate a group of 20 experts' perceptions of the relative importance of the CIs. Findings – The results show that the KCIs of new REDs are capital operation capability, entrepreneurship, land reserve capability, high sales revenue from the first real estate development project, and innovation capability. Originality/value – The five KCIs of new REDs are newly identified. In practical terms, examining these KCIs would help the business managers of new REDs to plan their business effectively by focusing their efforts on these key indicators. The KCIs also contribute theoretical constructs to the knowledge base on organisational competitiveness from a dynamic perspective, and can assist REDs in drawing on valuable experience and in formulating feasible strategies for survival and growth.
Abstract:
Background: Leisure-time physical activity (LTPA) shows promise for reducing the risk of poor mental health in later life, although gender- and age-specific research is required to clarify this association. This study examined the concurrent and prospective relationships of both LTPA and walking with mental health in older women. Methods: Community-dwelling women aged 73–78 years completed mailed surveys in 1999, 2002 and 2005 for the Australian Longitudinal Study on Women's Health. Respondents reported their weekly minutes of walking, moderate LTPA and vigorous LTPA. Mental health was defined as the number of depression and anxiety symptoms, as assessed with the Goldberg Anxiety and Depression Scale (GADS). Multivariable linear mixed models, adjusted for socio-demographic and health-related variables, were used to examine associations between five levels of LTPA (none, very low, low, intermediate and high) and GADS scores. For women who reported walking as their only LTPA, associations between walking and GADS scores were also examined. Women who reported depression or anxiety in 1999 were excluded, resulting in data from 6653 women being included in these analyses. Results: Inverse dose–response associations of both LTPA and walking with GADS scores were observed in concurrent and prospective models (p<0.001). Even low levels of LTPA and walking were associated with lowered scores, and the lowest scores were observed in women reporting high levels of LTPA or walking. Conclusion: The results support an inverse dose–response association of both LTPA and walking with mental health over three years in older women without depression or anxiety.
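For readers unfamiliar with this modelling approach, here is a minimal sketch of a multivariable linear mixed model with a random intercept per woman, written with statsmodels. The data, column names and LTPA groupings are all hypothetical; this is not the study's analysis code, only an illustration of the model class it describes.

```python
# Sketch of a linear mixed model relating LTPA level to GADS scores with
# repeated measures per woman (random intercept). All data and column
# names below are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "id":   [i for i in range(1, 7) for _ in range(3)],  # 6 women x 3 waves
    "ltpa": ["none", "none", "low",  "low",  "high", "high",
             "none", "low",  "low",  "high", "high", "high",
             "none", "none", "none", "low",  "low",  "high"],
    "gads": [9, 8, 6, 5, 3, 2, 10, 7, 6, 3, 2, 2, 9, 9, 8, 6, 5, 3],
})

# Fixed effect: LTPA level (reference = "none"); random intercept per id.
model = smf.mixedlm("gads ~ C(ltpa, Treatment('none'))", df,
                    groups=df["id"])
result = model.fit()
print(result.summary())
```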
Abstract:
The relationship between organic matter (OM) lability and temperature sensitivity is disputed, with recent observations suggesting that responses of relatively more resistant OM to increased temperature could be greater than, equivalent to, or less than responses of relatively more labile OM. This lack of clear understanding limits the ability to forecast carbon (C) cycle responses to temperature changes. Here, we derive a novel approach (denoted Q(10-q)) that accounts for changes in OM quality during decomposition and use it to analyze data from three independent sources. Results from new laboratory soil incubations (labile Q(10-q) = 2.1 ± 0.2; more resistant Q(10-q) = 3.8 ± 0.3) and reanalysis of data from other soil incubations reported in the literature (labile Q(10-q) = 2.3; more resistant Q(10-q) = 3.3) demonstrate that the temperature sensitivity of soil OM decomposition increases with decreasing soil OM lability. Analysis of data from a cross-site field litter bag decomposition study (labile Q(10-q) = 3.3 ± 0.2; resistant Q(10-q) = 4.9 ± 0.2) shows that litter OM follows the same pattern, with greater temperature sensitivity for more resistant litter OM. Furthermore, the initial response of cultivated soils, presumably containing less labile soil OM (Q(10-q) = 2.4 ± 0.3), was greater than that for undisturbed grassland soils (Q(10-q) = 1.7 ± 0.1). Soil C losses estimated using this approach will differ from previous estimates as a function of the magnitude of the temperature increase and the proportion of whole soil OM comprised of compounds sensitive to temperature over that temperature range. It is likely that increased temperature has already prompted the release of significant amounts of C to the atmosphere as CO2. Our results indicate that future losses of litter and soil C may be even greater than previously supposed.
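For context, the conventional index that Q(10-q) extends is the standard Q10: the factor by which a decomposition rate increases over a 10 °C warming. For rates k1 and k2 measured at temperatures T1 and T2, the standard definition is

$$ Q_{10} = \left( \frac{k_2}{k_1} \right)^{10/(T_2 - T_1)} $$

The Q(10-q) variant additionally normalizes for the decline in substrate quality as decomposition proceeds; its full derivation is given in the paper and is not reproduced in this abstract.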
Abstract:
The current paradigm in soil organic matter (SOM) dynamics is that the proportion of biologically resistant SOM will increase when total SOM decreases. Recently, several studies have focused on identifying functional pools of resistant SOM consistent with expected behaviours. Our objective was to combine physical and chemical approaches to isolate and quantify biologically resistant SOM by applying acid hydrolysis treatments to physically isolated silt- and clay-sized soil fractions. Microaggregate-derived and easily dispersed silt- and clay-sized fractions were isolated from surface soil samples collected from six long-term agricultural experiment sites across North America. These fractions were hydrolysed to quantify the non-hydrolysable fraction, which was hypothesized to represent a functional pool of resistant SOM. Organic C and total N concentrations in the four isolated fractions decreased in the order native > no-till > conventional-till at all sites. Concentrations of non-hydrolysable C (NHC) and N (NHN) were strongly correlated with initial concentrations, and C hydrolysability was found to be invariant with management treatment. Organic C was less hydrolysable than N, and overall, resistance to acid hydrolysis was greater in the silt-sized fractions than in the clay-sized fractions. The acid hydrolysis results are inconsistent with the expected behaviour of increasing recalcitrance with decreasing SOM content: while %NHN was greater in cultivated soils than in their native analogues, %NHC did not increase with decreasing total organic C concentrations. The analyses revealed an interaction between biochemical and physical protection mechanisms that acts to preserve SOM in fine mineral fractions, but the inconsistency of the pool size with expected behaviour remains to be fully explained.
Abstract:
No-tillage (NT) management has been promoted as a practice capable of offsetting greenhouse gas (GHG) emissions because of its ability to sequester carbon in soils. However, true mitigation is only possible if the overall impact of NT adoption reduces the net global warming potential (GWP) determined by fluxes of the three major biogenic GHGs (i.e. CO2, N2O, and CH4). We compiled all available data of soil-derived GHG emission comparisons between conventional tilled (CT) and NT systems for humid and dry temperate climates. Newly converted NT systems increase GWP relative to CT practices, in both humid and dry climate regimes, and longer-term adoption (>10 years) only significantly reduces GWP in humid climates. Mean cumulative GWP over a 20-year period is also reduced under continuous NT in dry areas, but with a high degree of uncertainty. Emissions of N2O drive much of the trend in net GWP, suggesting improved nitrogen management is essential to realize the full benefit from carbon storage in the soil for purposes of global warming mitigation. Our results indicate a strong time dependency in the GHG mitigation potential of NT agriculture, demonstrating that GHG mitigation by adoption of NT is much more variable and complex than previously considered, and policy plans to reduce global warming through this land management practice need further scrutiny to ensure success.
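The net-GWP accounting described here converts each gas flux into CO2-equivalents using its global warming potential. Below is a minimal sketch assuming IPCC AR4 100-year GWP factors (25 for CH4, 298 for N2O); the flux values are hypothetical, and the paper's compiled data and exact factors are not reproduced in this abstract.

```python
# Sketch: net global warming potential of a cropping system as the
# CO2-equivalent sum of its three biogenic GHG fluxes. GWP factors are
# IPCC AR4 100-year values; the flux numbers below are hypothetical.

GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def net_gwp(fluxes_kg_ha_yr):
    """Net GWP in kg CO2-eq per hectare per year (negative = net sink)."""
    return sum(GWP_100[gas] * flux for gas, flux in fluxes_kg_ha_yr.items())

# Hypothetical example: no-till stores soil C (negative CO2 flux), but
# the benefit is partly offset by higher N2O emissions, echoing the
# abstract's point that N2O drives much of the trend in net GWP.
conventional = {"CO2": 200.0, "CH4": -1.0, "N2O": 2.0}
no_till = {"CO2": -300.0, "CH4": -1.5, "N2O": 2.5}

for name, fluxes in [("conventional till", conventional),
                     ("no-till", no_till)]:
    print(f"{name}: {net_gwp(fluxes):.0f} kg CO2-eq/ha/yr")
```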