Abstract:
The purpose of this chapter is to provide an overview of the development and use of clinical guidelines as a tool for decision making in clinical practice. Nurses have always developed and used tools to guide clinical decision making related to interventions in practice. Since Florence Nightingale (Nightingale 1860) gave us ‘notes’ on nursing in the late 1800s, nurses have continued to use tools, such as standards, policies and procedures, protocols, algorithms, clinical pathways and clinical guidelines, to assist them in making appropriate decisions about patient care that eventuate in the best desired patient outcomes. Clinical guidelines have enjoyed growing popularity as a comprehensive tool for synthesising clinical evidence and information into user-friendly recommendations for practice. Historically, clinical guidelines were developed by individual experts or groups of experts by consensus, with no transparent process for the user to determine the validity and reliability of the recommendations. The acceptance of the evidence-based practice (EBP) movement as a paradigm for clinical decision making underscores the imperative for clinical guidelines to be systematically developed and based on the best available research evidence. Clinicians are faced with the dilemma of choosing from an abundance of guidelines of variable quality, or developing new guidelines. Where do you start? How do you find an existing guideline to fit your practice? How do you know if a guideline is evidence-based, valid and reliable? Should you apply an existing guideline in your practice or develop a new guideline? How do you get clinicians to use the guidelines? How do you know if using the guideline will make any difference in care delivery or patient outcomes? Whatever the choice, the challenge lies in choosing or developing a clinical guideline that is credible as a decision-making tool for the delivery of quality, efficient and effective care. This chapter will address the posed questions through an exploration of the ins and outs of clinical guidelines, from development to application to evaluation.
Abstract:
This naturalistic study investigated the mechanisms of change in measures of negative thinking and in 24-h urinary metabolites of noradrenaline (norepinephrine), dopamine and serotonin in a sample of 43 depressed hospital patients attending an eight-session group cognitive behavior therapy program. Most participants (91%) were taking antidepressant medication throughout the therapy period according to their treating psychiatrists' prescriptions. The sample was divided into outcome categories (19 Responders and 24 Non-responders) on the basis of a clinically reliable change index [Jacobson, N.S., & Truax, P., 1991. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.] applied to the Beck Depression Inventory scores at the end of the therapy. Results of repeated-measures analyses of variance (ANOVA) indicated that all measures of negative thinking improved significantly during therapy and, as expected, significantly more so in the Responders. The treatment had a significant impact on urinary adrenaline and metadrenaline excretion; however, these changes occurred in both Responders and Non-responders. Acute treatment did not significantly influence the six other monoamine metabolites. In summary, changes in urinary monoamine levels during combined treatment for depression were not associated with self-reported changes in mood symptoms.
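Responder status in this abstract is defined with the Jacobson and Truax (1991) reliable change index. A minimal Python sketch of that index is given below; the BDI scores, sample standard deviation and reliability value are illustrative assumptions, not figures taken from the study.

```python
import math

def reliable_change_index(pre: float, post: float, sd_pre: float, reliability: float) -> float:
    """Jacobson & Truax (1991) reliable change index for one patient.

    pre, post   : pre- and post-therapy scores (e.g. Beck Depression Inventory)
    sd_pre      : standard deviation of pre-therapy scores in the sample
    reliability : test-retest reliability of the measure
    """
    se_measurement = sd_pre * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)            # standard error of the difference score
    return (post - pre) / s_diff

# Illustrative values only: a 12-point BDI drop, sample SD of 9, reliability of 0.86.
rci = reliable_change_index(pre=28, post=16, sd_pre=9.0, reliability=0.86)
print(f"RCI = {rci:.2f}; reliable change: {abs(rci) > 1.96}")
```

A change is conventionally treated as reliable when the index exceeds 1.96 in absolute value, i.e. a change unlikely to reflect measurement error alone at p < .05.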
Abstract:
Proteases with important roles for bacterial pathogens which specifically reside within intracellular vacuoles are frequently homologous to those which have important virulence functions for other bacteria. Research has identified that some of these conserved proteases have evolved specialised functions for intracellular vacuole residing bacteria. Unique proteases with pathogenic functions have also been described from Chlamydia, Mycobacteria, and Legionella. These findings suggest that there are further novel functions for proteases from these bacteria which remain to be described. This review summarises recent findings of novel protease functions from the intracellular human pathogenic bacteria which reside exclusively in vacuoles.
Abstract:
A conversation over a cup of coffee in late 2005 between Australasian Compliance Institute members Bill Dee and Dr Len Gainsford quickly turned to previously unsuccessful attempts to start a professional journal about compliance. There were two main issues: the difficulty of getting a professional journal off the ground, and then of sustaining a continuous flow of quality contributions. As practitioners, Bill and Len knew there was a considerable amount of 'thought and practice leadership' compliance material out there, but they also knew that such material had not been presented in a relevant and interesting way. A foolish thought arose: could we start a professional journal that practitioners might actually read and use?
Abstract:
Poly(L-lactide-co-succinic anhydride) networks were synthesised via the carbodiimide-mediated coupling of poly(L-lactide) (PLLA) star polymers. When 4-(dimethylamino)pyridine (DMAP) alone was used as the catalyst, gelation did not occur. However, when 4-(dimethylamino)pyridinium p-toluenesulfonate (DPTS), the salt of DMAP and p-toluenesulfonic acid (PTSA), was the catalyst, the networks obtained had gel fractions comparable to those reported for networks synthesised by conventional methods. Greater gel fractions and conversion of the prepolymer terminal hydroxyl groups were observed when the hydroxyl-terminated star prepolymers reacted with succinic anhydride in a one-pot procedure than when the hydroxyl-terminated star prepolymers reacted with presynthesised succinic-terminated star prepolymers. The thermal properties of the networks (glass transition temperature, Tg; melting temperature, Tm; and crystallinity, Xc) were all strongly influenced by the average molecular weight between the crosslinks (M_c). The network with the smallest M_c (1400 g/mol) was amorphous and had a Tg of 59 °C, while the network with the largest M_c (7800 g/mol) was 15% crystalline and had a Tg of 56 °C.
Abstract:
We analyse the puzzling behavior of the volatility of individual stock returns around the turn of the Millennium. There has been much academic interest in this topic, but no convincing explanation has arisen. Our goal is to pull together the many competing explanations currently proposed in the literature to determine which, if any, are capable of explaining the volatility trend. We find that many of the different explanations capture the same unusual trend around the Millennium. We find that many of the variables are very highly correlated and it is thus difficult to disentangle their relative ability to explain the time-series behavior in volatility. It seems that all of the variables that track average volatility well do so mainly by capturing changes in the post-1994 period. These variables have no time-series explanatory power in the pre-1995 years, questioning the underlying idea that any of the explanations currently presented in the literature can track the trend in volatility over long periods.
Abstract:
Computer systems have become commonplace in most SMEs and technology is increasingly becoming a part of doing business. In recent years, the Internet has become readily available to businesses; consequently there has been growing pressure on SMEs to take up e-commerce. However, e-commerce is perceived by many as being unproven in terms of business benefit. This research aims to determine what, if any, benefits are derived from assimilating e-commerce technologies into SME business processes. This paper presents three in-depth case studies from the Real Estate industry in a regional setting. Overall, findings were positive and identified the following experiences: enhanced business efficiencies, cost benefits, improved customer interactions and increased business return on investment.
Abstract:
Public knowledge and beliefs about injury prevention are currently poorly understood. A total of 1030 residents of the State of Queensland, Australia, responded to questions about injury prevention in or around the home, on the roads, in or on the water and at work, as well as about deliberate injury and responsibility for preventing deliberate injury, allowing comparison with published injury prevalence data. Overall, the youngest members of society were identified as the most vulnerable to deliberate injury, with young adults accounting for 59% of responses, in line with published data. However, younger adults failed to indicate an awareness of their own vulnerability to deliberate injury in alcohol environments, even though 61% of older respondents were aware of this trend. Older respondents were the least inclined to agree that they could make a difference to their own safety in or around the home, but were more inclined to agree that they could make a difference to their own safety at work. The results are discussed with a view to using improved awareness of public beliefs about injury to identify barriers to the uptake of injury prevention strategies (e.g. low perceived injury risk), as well as areas where injury prevention strategies may receive public support.
Abstract:
The standard Blanchard-Quah (BQ) decomposition forces aggregate demand and supply shocks to be orthogonal. However, this assumption is problematic for a nation with an inflation target. The very notion of inflation targeting means that monetary policy reacts to changes in aggregate supply. This paper employs a modification of the BQ procedure that allows for correlated shifts in aggregate supply and demand. It is found that shocks to Australian aggregate demand and supply are highly correlated. The estimated shifts in the aggregate demand and supply curves are then used to measure the effects of inflation targeting on the Australian inflation rate and level of GDP.
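For readers unfamiliar with the baseline procedure this paper modifies, the sketch below implements the standard Blanchard-Quah identification (orthogonal demand and supply shocks, with the demand shock restricted to have no long-run effect on the first variable). The data file, variable ordering and lag length are assumptions for illustration; the paper's modified procedure allowing correlated shocks is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def blanchard_quah(data: np.ndarray, lags: int = 4):
    """Standard Blanchard-Quah identification with orthogonal shocks.

    data : (T, 2) array, e.g. [output growth, demand-side variable].
    Returns the structural impact matrix B0 such that u_t = B0 @ e_t,
    with the second (demand) shock having no long-run effect on the
    first variable.
    """
    res = VAR(data).fit(lags)
    sigma = np.asarray(res.sigma_u)                      # reduced-form residual covariance
    a_sum = res.coefs.sum(axis=0)                        # sum of the estimated lag matrices
    c1 = np.linalg.inv(np.eye(data.shape[1]) - a_sum)    # long-run multiplier C(1)
    # Factor the long-run covariance so it is lower triangular: only the
    # first (supply) shock is allowed a permanent effect on the first variable.
    lr_chol = np.linalg.cholesky(c1 @ sigma @ c1.T)
    b0 = np.linalg.solve(c1, lr_chol)                    # B0 = C(1)^{-1} chol(C(1) Sigma C(1)')
    return b0, res

# Hypothetical usage with an assumed CSV of [output growth, unemployment]:
# data = np.loadtxt("australian_macro.csv", delimiter=",")
# b0, var_res = blanchard_quah(data, lags=4)
```

The lower-triangular factorisation is what forces the demand shock's cumulative output effect to zero; relaxing the orthogonality of the two structural shocks, as the paper does, requires replacing that single restriction with an alternative identifying assumption.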
Abstract:
The notion of pedagogy for anyone in the teaching profession is innocuous. The term itself is steeped in history, but the details of the practice can be elusive. What does it mean for an academic to be embracing pedagogy? The problem is not limited to academics; most teachers baulk at the introduction of a pedagogic agenda and resist attempts to have them reflect on their classroom teaching practice, wherever that classroom might be constituted. This paper explores the application of a pedagogic model (Education Queensland, 2001) which was developed in the context of primary and secondary teaching and was part of a schooling agenda to improve pedagogy. As a teacher educator, I introduced the model to classroom teachers (Hill, 2002) using an Appreciative Inquiry (Cooperrider and Srivastva, 1987) approach and at the same time applied the model to my own pedagogy as an academic. Despite being instigated as a model for classroom teachers, I found through my own practitioner investigation that the model was useful for exploring my own pedagogy as a university academic (Hill, 2007, 2008).
References:
Cooperrider, D.L. and Srivastva, S. (1987). Appreciative inquiry in organisational life. In Passmore, W. and Woodman, R. (Eds), Research in Organisational Change and Development (Vol. 1). Greenwich, CT: JAI Press, pp. 129-169.
Education Queensland (2001). School Reform Longitudinal Study (QSRLS). Brisbane: Queensland Government.
Hill, G. (2002, December). Reflecting on professional practice with a cracked mirror: Productive Pedagogy experiences. Australian Association for Research in Education Conference, Brisbane, Australia.
Hill, G. (2007). Making the assessment criteria explicit through writing feedback: A pedagogical approach to developing academic writing. International Journal of Pedagogies and Learning, 3(1), 59-66.
Hill, G. (2008). Supervising practice based research. Studies in Learning, Evaluation, Innovation and Development, 5(4), 78-87.
Abstract:
The contribution of risky behaviour to the increased crash and fatality rates of young novice drivers is recognised in the road safety literature around the world. Exploring such risky driver behaviour has led to the development of tools like the Driver Behaviour Questionnaire (DBQ) to examine driving violations, errors, and lapses [1]. Whilst the DBQ has been utilised in young novice driver research, some items within this tool seem specifically designed for the older, more experienced driver, whilst others appear to assess both behaviour and related motives. The current study was prompted by the need for a risky behaviour measurement tool that can be utilised with young drivers with a provisional driving licence. Sixty-three items exploring young driver risky behaviour, developed from the road safety literature, were incorporated into an online survey. These items assessed driver, passenger, journey, car and crash-related issues. A sample of 476 drivers aged 17-25 years (M = 19, SD = 1.59 years) with a provisional driving licence and matched for age, gender, and education were drawn from a state-wide sample of 761 young drivers who completed the survey. Factor analysis based upon a principal components extraction, followed by an oblique rotation, was used to investigate the underlying dimensions of young novice driver risky behaviour. A five-factor solution comprising 44 items was identified, accounting for 55% of the variance in young driver risky behaviour. Factor 1 accounted for 32.5% of the variance and appeared to measure driving violations that were transient in nature: risky behaviours that followed risky decisions made during the journey (e.g., speeding). Factor 2 accounted for 10.0% of variance and appeared to measure driving violations that were fixed in nature, the risky decisions being undertaken before the journey (e.g., drink driving). Factor 3 accounted for 5.4% of variance and appeared to measure misjudgment (e.g., misjudged speed of an oncoming vehicle). Factor 4 accounted for 4.3% of variance and appeared to measure risky driving exposure (e.g., driving at night with friends as passengers). Factor 5 accounted for 2.8% of variance and appeared to measure driver emotions or mood (e.g., anger). Given that the aim of the study was to create a research tool, the factors informed the development of five subscales and one composite scale. The composite scale had a very high internal consistency (Cronbach’s alpha of .947). Self-reported data relating to participants' police-detected driving offences, crash involvement, and intentions to break road rules within the next year were also collected. While the composite scale was only weakly correlated with self-reported crashes (r = .16, p < .001), it was moderately correlated with offences (r = .26, p < .001), and highly correlated with intentions to break the road rules (r = .57, p < .001). Further application of the developed scale is needed to confirm the factor structure within other samples of young drivers, both in Australia and in other countries. In addition, future research could explore the applicability of the scale for investigating the behaviour of other types of drivers.
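As an illustration of the analysis pipeline the abstract describes (principal components extraction, oblique rotation, and an internal consistency check), the Python sketch below uses placeholder data and the third-party factor_analyzer package; the real 63 survey items and responses are not reproduced, so the numbers it prints are not the study's results.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer   # assumes the factor_analyzer package is installed

# Placeholder data standing in for the 476 drivers x 63 items described in the abstract.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(476, 63)),
                         columns=[f"item_{i:02d}" for i in range(1, 64)])

# Principal components extraction followed by an oblique (oblimin) rotation,
# mirroring the five-factor approach described in the abstract.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="oblimin")
fa.fit(responses.values)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# The study reports alpha = .947 for its 44-item composite scale; here the
# coefficient is computed over the placeholder items only.
print(f"composite alpha = {cronbach_alpha(responses.values):.3f}")
```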
Abstract:
Serotonergic hypofunction is associated with a depressive mood state, an increased drive to eat and preference for sweet (SW) foods. High-trait anxiety individuals are characterised by a functional shortage of serotonin during stress, which in turn increases their susceptibility to experience a negative mood and an increased drive for SW foods. The present study examined whether an acute dietary manipulation, intended to increase circulating serotonin levels, alleviated the detrimental effects of a stress-inducing task on subjective appetite and mood sensations, and preference for SW foods in high-trait anxiety individuals. Thirteen high- (eleven females and two males; anxiety scores 45·5 (sd 5·9); BMI 22·9 (sd 3·0)kg/m2) and twelve low- (ten females and two males; anxiety scores 30·4 (sd 4·8); BMI 23·4 (sd 2·5) kg/m2) trait anxiety individuals participated in a placebo-controlled, two-way crossover design. Participants were provided with 40 g α-lactalbumin (LAC; l-tryptophan (Trp):large neutral amino acids (LNAA) ratio of 7·6) and 40 g casein (placebo) (Trp:LNAA ratio of 4·0) in the form of a snack and lunch on two test days. On both the test days, participants completed a stress-inducing task 2 h after the lunch. Mood and appetite were assessed using visual analogue scales. Changes in food hedonics for different taste and nutrient combinations were assessed using a computer task. The results demonstrated that the LAC manipulation did not exert any immediate effects on mood or appetite. However, LAC did have an effect on food hedonics in individuals with high-trait anxiety after acute stress. These individuals expressed a lower liking (P = 0·012) and SW food preference (P = 0·014) after the stressful task when supplemented with LAC.
Abstract:
Annual reports are an important component of New Zealand schools’ public accountability. Through the annual report the governance body informs stakeholders about school aims, objectives, achievements, use of resources, and financial performance. This paper identifies the perceived usefulness of the school annual report to recipients and the extent to which it serves as an instrument of accountability and/or decision-usefulness. The study finds that the annual report is used for a variety of purposes, including: to determine if the school has conducted its activities effectively and achieved stated objectives and goals; to examine student achievements; to assess financial accountability and performance; and to make decisions about the school as a suitable environment for their child/children. Nevertheless, the study also finds that other forms of communication are more important sources of information about the school than the annual report which is seen to fall short of users’ required qualities of understandability, reliability and readability. It would appear imperative that policy makers review the functional role of the school annual report which is a costly document to prepare. Further, school managers need to engage in alternative means to communicate sufficient and meaningful information in the discharge of public accountability.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root-mean-square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
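The GPS validation in the first study compares Doppler-derived speed against speed computed from the change in GPS position over time. A minimal Python sketch of the latter calculation is shown below; the coordinates and 1 Hz logging rate are assumptions for illustration, not data from the thesis.

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two GPS fixes (spherical Earth, mean radius)."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def position_speed(fixes):
    """Speed (m/s) between consecutive (time_s, lat, lon) fixes, i.e. the
    change-in-GPS-position-over-time method; Doppler-derived speed would be
    read directly from the receiver instead."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        speeds.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return speeds

# Illustrative fixes only (1 Hz logging assumed), not measurements from the thesis.
fixes = [(0, -27.47000, 153.02000), (1, -27.47003, 153.02003), (2, -27.47007, 153.02006)]
print(position_speed(fixes))
```

Doppler-based speed avoids differencing noisy position estimates, which is consistent with the abstract's finding that the Doppler method gave the smallest speed errors.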