Abstract:
It has been known since Rhodes Fairbridge’s first attempt to establish a global pattern of Holocene sea-level change by combining evidence from Western Australia and from sites in the northern hemisphere that the details of sea-level history since the Last Glacial Maximum vary considerably across the globe. The Australian region is relatively stable tectonically and is situated in the ‘far-field’ of former ice sheets. It therefore preserves important records of post-glacial sea levels that are less complicated by neotectonics or glacio-isostatic adjustments. Accordingly, the relative sea-level record of this region is dominantly one of glacio-eustatic (ice equivalent) sea-level changes. The broader Australasian region has provided critical information on the nature of post-glacial sea level, including the termination of the Last Glacial Maximum, when sea level was approximately 125 m lower than present around 21,000–19,000 years BP, and insights into meltwater pulse 1A between 14,600 and 14,300 cal. yr BP. Although most parts of the Australian continent reveal a high degree of tectonic stability, research conducted since the 1970s has shown that the timing and elevation of a Holocene highstand vary systematically around its margin. This is attributed primarily to variations in the timing of the response of the ocean basins and shallow continental shelves to the increased ocean volumes following ice-melt, including a process known as ocean siphoning (i.e. glacio-hydro-isostatic adjustment processes). Several seminal studies in the early 1980s produced important data sets from the Australasian region that have provided a solid foundation for more recent palaeo-sea-level research. This review revisits these key studies, emphasising their continuing influence on Quaternary research, and incorporates relatively recent investigations to interpret the nature of post-glacial sea-level change around Australia.
These include a synthesis of research from the Northern Territory, Queensland, New South Wales, South Australia and Western Australia. A focus of these more recent studies has been the re-examination of: (1) the accuracy and reliability of different proxy sea-level indicators; (2) the rate and nature of post-glacial sea-level rise; (3) the evidence for the timing, elevation and duration of mid-Holocene highstands; and (4) the notion of mid- to late Holocene sea-level oscillations, and their basis. Based on this synthesis of previous research, it is clear that estimates of past sea-surface elevation are a function of eustatic factors as well as the morphodynamics of individual sites, the wide variety of proxy sea-level indicators used, their wide geographical range, and their indicative meaning. Some progress has been made in understanding the variability of the accuracy of proxy indicators in relation to their contemporary sea level, the inter-comparison of the variety of dating techniques used, and the nuances of calibrating radiocarbon ages to sidereal years. These issues need to be thoroughly understood before proxy sea-level indicators can be incorporated into credible reconstructions of relative sea-level change at individual locations. Many of the issues that challenged sea-level researchers in the latter part of the twentieth century remain contentious today. Divergent opinions remain about: (1) exactly when sea level attained present levels following the most recent post-glacial marine transgression (PMT); (2) the elevation that sea level reached during the Holocene sea-level highstand; (3) whether sea level fell smoothly from a metre or more above its present level following the PMT; (4) whether sea level remained at these highstand levels for a considerable period before falling to its present position; or (5) whether it underwent a series of moderate oscillations during the Holocene highstand.
Abstract:
Background: Side effects of the medications used for procedural sedation and analgesia in the cardiac catheterisation laboratory are known to cause impaired respiratory function. Impaired respiratory function poses considerable risk to patient safety, as it can lead to inadequate oxygenation. Knowledge of the conditions that predict impaired respiratory function prior to the procedure would enable nurses to identify at-risk patients and selectively implement intensive respiratory monitoring, reducing the possibility of inadequate oxygenation. Aim: To identify pre-procedure risk factors for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory. Design: Retrospective matched case–control study. Methods: Twenty-one cases of impaired respiratory function were identified and matched to 113 controls from a consecutive cohort of patients over 18 years of age. Conditional logistic regression was used to identify risk factors for impaired respiratory function. Results: With each additional indicator of acute illness, case patients were nearly two times more likely than their controls to experience impaired respiratory function (OR 1.78; 95% CI 1.19–2.67; p = 0.005). Indicators of acute illness included emergency admission, transfer from a critical care unit for the procedure, and requiring respiratory or haemodynamic support in the lead-up to the procedure. Conclusion: Several factors that predict the likelihood of impaired respiratory function were identified. The results from this study could be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory.
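Since the reported odds ratio is per additional indicator of acute illness, the fitted conditional logistic model implies multiplicative scaling of the odds. This is the standard interpretation of a per-unit odds ratio; the notation below is ours, not the authors':

```latex
\frac{\mathrm{odds}(k+1)}{\mathrm{odds}(k)} = e^{\hat{\beta}} = 1.78
```

That is, a patient with \(k\) indicators has an estimated \(1.78^{k}\) times the odds of impaired respiratory function relative to a patient with none, e.g. \(1.78^{2} \approx 3.2\) for two indicators.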
Abstract:
The nature and characteristics of how learners learn today are changing. As technology use in learning and teaching continues to grow, its integration to facilitate deep learning and critical thinking becomes a primary consideration. The implications for learner use, implementation strategies, the design of integration frameworks, and the evaluation of their effectiveness in learning environments cannot be overlooked. This study specifically looked at the impact that technology-enhanced learning environments have on different learners’ critical thinking in relation to eductive ability, technological self-efficacy, and approaches to learning and motivation in collaborative groups. These were explored within an instructional design framework called CoLeCTTE (collaborative learning and critical thinking in technology-enhanced environments), which was proposed, revised and used across three cases. The investigation was restricted to three key questions: (1) Do learner skill bases (learning approach and eductive ability) influence critical thinking within the proposed CoLeCTTE framework? If so, how? (2) Do learning technologies influence the facilitation of deep learning and critical thinking within the proposed CoLeCTTE framework? If so, how? (3) How might learning be designed to facilitate the acquisition of deep learning and critical thinking within a technology-enabled collaborative environment? The rationale, assumptions and method for using a mixed-methods and naturalistic case study approach are discussed, and three cases are explored and analysed. The study was conducted at the tertiary level (undergraduate and postgraduate), where participants were engaged in critical technical discourse within their own disciplines. Group behaviour was observed and coded, attributes or skill bases were measured, and participants were interviewed to acquire deeper insights into their experiences.
A progressive case study approach was used, allowing case investigation to be implemented in a "ladder-like" manner. Cases 1 and 2 used the proposed CoLeCTTE framework, with more in-depth analysis conducted for Case 2, resulting in a revision of the framework. Case 3 used the revised CoLeCTTE framework and was analysed in depth; the findings led to the final version of the framework. In all three cases, content analysis of group work was conducted to determine critical thinking performance. The researcher used three small groups in which the learner skill bases of eductive ability, technological self-efficacy, and approaches to learning and motivation were measured. Participants in Cases 2 and 3 were interviewed, and observations provided more in-depth analysis. The main outcome of this study is an analysis of the nature of critical thinking within collaborative groups and technology-enhanced environments, positioned in a theoretical instructional design framework called CoLeCTTE. The findings revealed the importance of the Achieving Motive dimension of a student’s learning approach and showed how direct intervention and strategies can positively influence critical thinking performance. The findings also identified factors that can adversely affect critical thinking performance: poor learning skills; frustration, stress and poor self-confidence; prioritising other demands over learning; and inadequate appropriation of group roles and tasks. These findings are set out as instructional design guidelines for the judicious integration of learning technologies into learning and teaching practice in higher education to support deep learning and critical thinking in collaborative groups. The guidelines are presented in two key areas: technology and tools; and activity design, monitoring, control and feedback.
Abstract:
This study examined the beliefs underlying people’s decision-making, from a theory of planned behaviour (TPB) framework, in the prediction of curbside household waste recycling. Community members in Brisbane, Australia (N = 148) completed a questionnaire assessing the belief-based TPB measures of attitudinal beliefs (costs and benefits), normative beliefs (important referents), and control beliefs (barriers) in relation to engaging in curbside household waste recycling over a 2-week period. Two weeks later, participants completed self-report measures of recycling behaviour for the previous fortnight. The results revealed that the attitudinal, normative, and control beliefs of people who performed higher and lower levels of recycling differed significantly. A regression analysis identified both normative and control beliefs as the main determinants of recycling behaviour. For normative beliefs, high-level recyclers perceived more approval from referents such as partners, friends, and neighbours to recycle all eligible materials. In addition, the strong results for control beliefs indicated that barriers such as forgetfulness, lack of time, and laziness were rated as more likely to hamper optimal recycling performance by low-level recyclers. These findings provide important applied information about which beliefs to target in the development of future community recycling campaigns.
Abstract:
We investigated critical belief-based targets for promoting the introduction of solid foods to infants at six months. First-time mothers (N = 375) completed a Theory of Planned Behaviour belief-based questionnaire and follow-up questionnaire assessing the age the infant was first introduced to solids. Normative beliefs about partner/spouse (β = 0.16) and doctor (β = 0.22), and control beliefs about commercial baby foods available for infants before six months (β = −0.20), predicted introduction of solids at six months. Intervention programs should target these critical beliefs to promote mothers’ adherence to current infant feeding guidelines to introduce solids at around six months.
Abstract:
Australian higher education is presently subject to a period of substantial change. The needs of the economy and workforce, together with the broader educational role of the university, are leading to a focus on lifelong learning as a tool for bringing together the apparently diverging needs of different groups. Within this broader context, the emphasis on lifelong learning and associated graduate capabilities is creating opportunities for new partnerships between faculty and librarians, partnerships that bring the two groups together in ways that are helping to transform the experience of teaching and learning. This paper explores emerging partnerships in diverse areas, including research and scholarship, curriculum, policy, supervision, and staff development. These partnerships are in the early phases of development and result from a broad focus on the learning and information literacy needs of students, as opposed to a narrow focus on using the library and its information resources. Taken together, and viewed from a system-wide perspective, they reveal a complex dynamic that deserves wider attention across the Australian higher education system and internationally.
Abstract:
The development of toll roads in Indonesia started around 1978. Initially, the management and development of toll roads sat directly under the Government of Indonesia (GoI), undertaken through PT JasaMarga, a state-owned enterprise specifically established to provide toll roads. Due to the slow growth and low capacity of toll roads to fulfil infrastructure needs in the first ten years of operation (only 2.688 km/year), GoI changed its strategy in 1989 to one of private sector participation in road delivery through a Public Private Partnership (PPP) scheme. In this latter period, PT JasaMarga had two roles, acting both as the regulator of the private sector and as an operator. However, from 1989 to 2004 the growth rate of toll roads actually decreased further, to 2.300 km/year. Facing this challenge of low growth, in 2004 GoI changed the toll road management system, and the role of regulator was returned to the Government through the establishment of the Toll Road Regulatory Agency (BPJT). GoI also amended the institutional framework to strengthen the toll road management system. Despite the introduction of this new institutional framework, the growth of toll roads still showed insignificant change. This problem in toll road development has generated an urgent need for research into the issue. The aim of the research is to understand the performance of the new institutional framework in enhancing PPP-procured toll road development. The method of the research was a questionnaire survey distributed to private sector respondents involved in toll road development. The results of this study show that there are several problems inherent in the institutional framework, but the most significant comes from uncertainty over the function of the strategic executive body in the land expropriation process.
Abstract:
Exposure to ultrafine particles (UFPs) is deemed to be a major risk to human health. Accordingly, airborne particle studies have been performed in recent years to evaluate the most critical micro-environments and to identify the main UFP sources. Nonetheless, in order to properly evaluate UFP exposure, personal monitoring is required, as it is the only way to relate particle exposure levels to the activities performed and the micro-environments visited. To this purpose, the present work reports the results of an experimental analysis aimed at showing the effect of time-activity patterns on UFP personal exposure. In particular, 24 non-smoking couples (12 in winter and 12 in summer), each comprising a man who worked full-time and a woman who was a homemaker, were analyzed using personal particle counters and GPS monitors. Each couple was investigated for a 48-h period, during which they also filled out a diary reporting the daily activities performed. Time-activity patterns, particle number concentration exposure and the related dose received by the participants, in terms of particle alveolar-deposited surface area, were measured. The average exposure to particle number concentration was higher for women during both summer and winter (summer: women 1.8×10⁴ part. cm⁻³, men 9.2×10³ part. cm⁻³; winter: women 2.9×10⁴ part. cm⁻³, men 1.3×10⁴ part. cm⁻³), which was likely due to the time spent undertaking cooking activities. Staying indoors after cooking also led to a higher alveolar-deposited surface area dose for both women and men during winter (9.12×10² and 6.33×10² mm², respectively), when indoor ventilation was greatly reduced. The effect of cooking activities was also detected in terms of women’s dose intensity (dose per unit time), which was 8.6 in winter and 6.6 in summer. By contrast, the highest dose-intensity activity for men was time spent using transportation (2.8 in both winter and summer).
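The dose-intensity metric above is simply the activity-specific dose normalised by the time spent on that activity; in symbols (notation ours, not the authors'):

```latex
\mathrm{DI}_j = \frac{D_j}{T_j}
```

where \(D_j\) is the alveolar-deposited surface area dose received during activity \(j\) and \(T_j\) is the time spent on it. Comparing \(\mathrm{DI}_j\) across activities identifies which ones deliver dose fastest, independently of their duration.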
Abstract:
Nitrous oxide emissions from intensive, fertilised agricultural systems have been identified as significant contributors to both Australia's and the global greenhouse gas (GHG) budget. This contribution is expected to increase as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exist on N₂O trace gas fluxes from subtropical or tropical tree cropping soils, which are critical for the development of effective mitigation strategies. This study aimed to quantify GHG emissions over two consecutive years (March 2007 to March 2009) from a 30-year-old lychee orchard in the humid subtropical region of Australia. GHG fluxes were measured using a combination of high-temporal-resolution automated sampling and manually sampled chambers. No fertiliser was added to the plots during the 2007 measurement season. A split application of nitrogen fertiliser (urea) was added at the rate of 265 kg N ha⁻¹ during the autumn and spring of 2008. Emissions of N₂O were influenced by rainfall events and seasonal temperatures during 2007 and by the fertilisation events in 2008. Annual N₂O emissions from the lychee canopy increased from 1.7 kg N₂O-N ha⁻¹ yr⁻¹ in 2007 to 7.6 kg N₂O-N ha⁻¹ yr⁻¹ following fertiliser application in 2008. This represented an emission factor of 1.56%, corrected for background emissions. The timing of the split application was found to be critical to N₂O emissions, with over twice as much lost following the application in spring (EF: 2.44%) compared to autumn (EF: 1.10%). This research suggests that avoiding fertiliser application during the hot and moist spring/summer period can reduce N₂O losses without compromising yields.
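The background-corrected emission factor quoted above follows the standard form; the sketch below uses our own symbols, and the paper's background term need not equal the full unfertilised-year flux:

```latex
\mathrm{EF}\,(\%) = \frac{E_{\mathrm{fert}} - E_{\mathrm{background}}}{N_{\mathrm{applied}}} \times 100
```

where \(E_{\mathrm{fert}}\) and \(E_{\mathrm{background}}\) are the N₂O-N losses (kg ha⁻¹) under fertilised and unfertilised conditions over the same interval, and \(N_{\mathrm{applied}}\) is the fertiliser N rate (here 265 kg N ha⁻¹).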
Abstract:
Food is a multidimensional construct. It has social, cultural, economic, psychological, emotional, biological, and political dimensions. It is both a material object and a catalyst for a range of social and cultural action. Richly implicated in the social and cultural milieu, food is a central marker of culture and society. Yet little is known about the messages and knowledges about food in the school curriculum. Popular debates around food in schools are largely concerned with biomedical issues of obesity, exercise and nutrition. This is a study of the sociological dimensions of food-related messages, practices and knowledge formations in the primary school curriculum. It uses an exploratory, qualitative case study methodology to identify and examine the food activities of a Year 5 class in a Queensland school. Data were gathered over a two-year period using observation, documentation and interview methods. Food was found to be an integral part of the primary school's activity. It had economic, symbolic, pedagogic, and instrumental value. Messages about food were found in the official, enacted and hidden curricula, which were framed by a food governance framework of legislation, procedures and norms. In the school studied, food knowledge was commodified as part of a political economy that centred on an 'eat more' message. Certain foods were privileged over others, while myths about energy, fruit, fruit juice and sugar shaped student dispositions, values, norms and action. There was little engagement with the cognitive and behavioural dimensions of food and nutrition. The thesis concludes with recommendations for a wholesale reconsideration of food in schools as curricular content and knowledge.
Abstract:
Information privacy is a critical success/failure factor in information technology supported healthcare (eHealth). eHealth systems utilise electronic health records (EHRs) as the main source of information; thus, implementing appropriate privacy-preserving methods for EHRs is vital for the proliferation of eHealth. Whilst information privacy may be a fundamental requirement for eHealth consumers, healthcare professionals demand non-restricted access to patient information for improved healthcare delivery, creating an environment in which stakeholder requirements are contradictory. There is therefore a need to achieve an appropriate balance of requirements in order to build successful eHealth systems. Towards achieving this balance, a new genre of eHealth systems called Accountable-eHealth (AeH) systems has been proposed. In this paper, an access control model for EHRs is presented that can be utilised by AeH systems to create information usage policies that fulfil both stakeholders’ requirements. These policies are used to accomplish the aforementioned balance of requirements, creating a satisfactory eHealth environment for all stakeholders. The access control model is validated using a Web-based prototype as a proof of concept.
Abstract:
Background: Critical care units are designed and resourced to save lives, yet the provision of end-of-life care is a significant component of nursing work in these settings. Limited research has investigated the actual practices of critical care nurses in the provision of end-of-life care, or the factors influencing these practices. To improve the care that patients at the end of life and their families receive, and to support nurses in the provision of this care, further research is needed. The purpose of this study was to identify critical care nurses' end-of-life care practices, the factors influencing the provision of end-of-life care and the factors associated with specific end-of-life care practices. Methods: A three-phase exploratory sequential mixed-methods design was utilised. Phase one used a qualitative approach involving interviews with a convenience sample of five intensive care nurses to identify their end-of-life care experiences and practices. In phase two, an online survey instrument was developed, based on a review of the literature and the findings of phase one. The survey instrument was reviewed by six content experts and pilot tested with a convenience sample of 28 critical care nurses (response rate 45%) enrolled in a postgraduate critical care nursing subject. The refined survey instrument was used in phase three of this study to conduct a national survey of critical care nurses. Descriptive analyses, exploratory factor analysis and univariate general linear modelling were undertaken on completed survey responses from 392 critical care nurses (response rate 25%). Results: Six end-of-life care practice areas were identified in this study: information sharing, environmental modification, emotional support, patient- and family-centred decision making, symptom management and spiritual support.
The items most frequently identified as always undertaken by critical care nurses in the provision of end-of-life care were from the information sharing and environmental modification practice areas. Items least frequently identified as always undertaken included items from the emotional support practice area. Eight factors influencing the provision of end-of-life care were identified: palliative values, patient and family preferences, knowledge, preparedness, organisational culture, resources, care planning, and emotional support for nurses. Strong agreement was noted with items reflecting values consistent with a palliative approach and inclusion of patient and family preferences. Variation was noted in agreement for items regarding opportunities for knowledge acquisition in the workplace and formal education, yet most respondents agreed that they felt adequately prepared. A context of nurse-led practice was identified, with variation in access to resources noted. Collegial support networks were identified as a source of emotional support for critical care nurses. Critical care nurses reporting values consistent with a palliative approach and/or those who scored higher on support for patient and family preferences were more likely to be engaged in end-of-life care practice areas identified in this study. Nurses who reported higher levels of preparedness and access to opportunities for knowledge acquisition were more likely to report engaging in interpersonal practices that supported patient- and family-centred decision making and emotional support of patients and their families. A negative relationship was identified between the explanatory variables of emotional support for nurses and death anxiety, and the patient- and family-centred decision making practice area. Contextual factors had a limited influence as explanatory variables of specific end-of-life care practice areas.
Gender was identified as a significant explanatory variable in the emotional and spiritual support practice areas, with male gender associated with lower summated scores on these practice scales. Conclusions: Critical care nurses engage in practices to share control with and support inclusion of families experiencing death and dying. The most frequently identified end-of-life care practices were those that are easily implemented, practical strategies aimed at supporting the patient at the end of life and the patient's family. These practices arguably require less emotional engagement by the nurse. Critical care nurses' responses reflected values consistent with a palliative approach and a strong commitment to the inclusion of families in end-of-life care, and these factors were associated with engagement in all end-of-life care practice areas. Perceived preparedness or confidence with the provision of end-of-life care was associated with engagement in interpersonal caring practices. Critical care nurses autonomously engage in the provision of end-of-life care within the constraints of an environment designed for curative care and rely on their colleagues for emotional support. Critical care nurses must be adequately prepared and supported to provide comprehensive care in all areas of end-of-life care practice. The findings of this study raise important implications and inform recommendations for practice, education and further research.
Abstract:
Introduction. The purpose of this chapter is to address the question raised in the chapter title: specifically, how can models of motor control help us understand low back pain (LBP)? Several classes of models have been used in the past for studying spinal loading, stability, and risk of injury (see Reeves and Cholewicki (2003) for a review of past modeling approaches), but for the purpose of this chapter we will focus primarily on models used to assess motor control and its effect on spine behavior. This chapter consists of four sections. The first section discusses why a shift in modeling approaches is needed to study motor control issues. We will argue that the current approach for studying the spine system is limited and not well suited to assessing motor control issues related to spine function and dysfunction. The second section will explore how models can be used to gain insight into how the central nervous system (CNS) controls the spine. This segues nicely into the third section, which will address how models of motor control can be used in the diagnosis and treatment of LBP. Finally, the last section will deal with the issue of model verification and validity. This issue is important since modeling accuracy is critical for obtaining useful insight into the behavior of the system being studied. This chapter is not intended to be a critical review of the literature; it is instead intended to capture some of the discussion raised during the 2009 Spinal Control Symposium, with some elaboration on certain issues. Readers interested in more details are referred to the cited publications.
Abstract:
Maternal depression is a known risk factor for poor outcomes for children. Pathways to these poor outcomes relate to reduced maternal responsiveness or sensitivity to the child. Impaired responsiveness potentially affects the feeding relationship and thus may be a risk factor for inappropriate feeding practices. The aim of this study was to examine the longitudinal relationships between self-reported maternal post-natal depressive symptoms at child age 4 months and feeding practices at child age 2 years in a community sample. Participants were Australian first-time mothers allocated to the control group of the NOURISH randomized controlled trial when infants were 4 months old. Complete data from 211 mothers (of 346 allocated) followed up when their children were 2 years of age (51% girls) were available for analysis. The relationship between Edinburgh Postnatal Depression Scale (EPDS) score (child age 4 months) and child feeding practices (child age 2 years) was tested using hierarchical linear regression analysis adjusted for maternal and child characteristics. Higher EPDS score was associated with less responsive feeding practices at child age 2 years: greater pressure [β = 0.18, 95% confidence interval (CI): 0.04–0.32, P = 0.01], restriction (β = 0.14, 95% CI: 0.001–0.28, P = 0.05), instrumental (β = 0.14, 95% CI: 0.005–0.27, P = 0.04) and emotional (β = 0.15, 95% CI: 0.01–0.29, P = 0.03) feeding practices (ΔR² values: 0.02–0.03, P < 0.05). This study provides evidence for the proposed link between maternal post-natal depressive symptoms and lower responsiveness in child feeding. These findings suggest that the provision of support to mothers experiencing some level of depressive symptomatology in the early post-natal period may improve responsiveness in the child feeding relationship.
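The ΔR² values quoted are the usual incremental variance explained when the EPDS score is added to the covariate-adjusted model; this is the standard definition for a hierarchical regression, stated here in our own notation rather than the paper's:

```latex
\Delta R^2 = R^2_{\text{covariates}+\text{EPDS}} - R^2_{\text{covariates}}
```

A ΔR² of 0.02–0.03 therefore means the EPDS score accounted for an additional 2–3% of the variance in each feeding-practice score beyond the maternal and child characteristics alone.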
Abstract:
Dementia is an irreversible and incurable syndrome that leads to progressive impairment of cognitive functions and to behavioural and psychological symptoms such as agitation, depression and psychosis. Appropriate environmental conditions can help delay its onset and progression, and indoor environmental (IE) factors have a major impact. However, there is no firm understanding of the full range of relevant IE factors and their impact levels. This paper describes a preliminary study investigating the effects of the IE on dementia residents of Hong Kong residential care homes (RCHs). The study involved six purposively selected focus groups, each comprising the main stakeholders: the dementia residents’ caregivers, RCH staff and/or registered nurses, and architects. Using the Critical Incident Technique, the main contexts and experiences of behavioural problems of dementia residents caused by the IE were explored, and the key causal RCH IE quality factors were identified, together with the associated responses and stress levels involved. The findings indicate that the acoustic environment, lighting and the thermal environment are the most important influencing factors. Many of the remedies suggested by the focus groups are quite simple to carry out and are summarised in the form of recommendations to current RCH providers and users. The knowledge acquired in this initial study will help enrich the knowledge of IE design for dementia-specific residential facilities. It also provides some preliminary insights for healthcare policymakers and practitioners in the building design/facilities management and dementia-care sectors into the IE factors contributing to a more comfortable, healthy and sustainable RCH living environment in Hong Kong.