967 results for "driving while impaired"
Abstract:
As the world’s population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface waters and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients, which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared with North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies have reported on nutrient cycling and associated processes at the catchment scale in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited, and there is a paucity of data, particularly for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors for algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment are important in coastal zone management. Although Australia is the driest continent, in subtropical regions such as southeast Queensland, rainfall patterns have a significant effect on runoff and thus the nutrient cycle at a catchment scale. These rainfall patterns are becoming increasingly variable. The monitoring of these climatic conditions and the hydrological response of agricultural catchments is therefore also important for reducing anthropogenic effects on surface water and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence of forestry and agriculture on N and P forms, sources, distribution and fate in the surface waters and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four components, each reported individually. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment, and shallow groundwater N and P concentrations, in particular during the extended dry conditions that were encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on the distribution of N and P in surface waters and associated groundwater.
A total of 30 surface and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared with the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface waters and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L⁻¹; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L⁻¹), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L⁻¹, and organic N averages between 0.07 and 0.3 mg N L⁻¹. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L⁻¹), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L⁻¹. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soils and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and H2O (16O/18O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the respective influences of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek), while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and their isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequent runoff are the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L⁻¹ to 0.09 mg N L⁻¹, and a decrease in the δ15N signatures from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge about the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%). Only a small proportion of the water in the creek is contributed by total base-flow (35%). This finding supports the results of the stable isotopes 16O/18O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation waters delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 16O/18O and 2H/1H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions that were observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads.
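For context on the EF statistic reported above: SWAT calibration studies conventionally report the Nash–Sutcliffe model efficiency, and assuming the 94% and 54% figures correspond to EF values of 0.94 and 0.54, the coefficient compares simulated discharge S_i with observed discharge O_i (mean Ō) over the n time steps:

\[ \mathrm{EF} = 1 - \frac{\sum_{i=1}^{n} (O_i - S_i)^2}{\sum_{i=1}^{n} (O_i - \bar{O})^2} \]

An EF of 1 indicates a perfect match between simulated and observed flow, while an EF of 0 indicates the simulation is no better than the mean of the observations; on that reading, moving from 0.54 to 0.94 after transposing the reference-catchment parameters is a substantial gain in predictive skill.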
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intense storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting acquired during the one-year sampling program may be used in similar catchment hydrological studies where such detailed information is missing. Also, the study concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water. Concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
Adolescent injury remains a significant public health concern and is often the result of at-risk transport-related behaviours. When a person is injured, actions taken by bystanders are of crucial importance, and timely first aid appears to reduce the severity of some injuries (Hussain & Redmond, 1994). Accordingly, researchers have suggested that first aid training should be more widely available as a potential strategy to reduce injury (Lynch et al., 2006). Further research has identified schools as an ideal setting for learning first aid skills as a means of injury prevention (Maitra, 1997). The current research examines the implications of school-based first aid training for young adolescents on injury prevention, particularly relating to transport injuries. First aid training was integrated with peer protection and school connectedness within the Skills for Preventing Injury in Youth (SPIY) program (Buckley & Sheehan, 2009) and evaluated to determine if there was a reduction in the likelihood of transport-related injuries at six months post-intervention. In Queensland, Australia, 35 high schools were recruited and randomly assigned to intervention and control conditions in early April 2012. A total of 2,000 Year 9 students (mean age 13.5 years, 39% male) completed surveys six months post-intervention in November 2012. Analyses will compare the intervention students with control group students who self-reported i) first aid training with a teacher, professional or other adult and ii) no first aid training in the preceding six months. Using the Extended Adolescent Injury Checklist (E-AIC) (Chapman, Buckley & Sheehan, 2011), the transport-related injury experiences included being injured while “riding as a passenger in a car”, “driving a car off road” and “riding a bicycle”. It is expected that students taught first aid within SPIY will report significantly fewer transport-related injuries in the previous three months, compared with the control groups described above. Analyses will be conducted separately for sex and socio-economic class of schools. Findings from this study will provide insight into the value of first aid in adolescent injury prevention and provide evidence as to whether teaching first aid skills within a school-based health education curriculum has traffic safety implications.
Abstract:
Introduction: Although advances in treatment modalities have improved the survival of head and neck (H&N) cancer patients over recent years, survivors’ quality of life (QoL) can be impaired for a number of reasons. The investigation of QoL determinants can inform the design of supportive interventions for this population. Objectives: To examine the QoL of H&N cancer survivors at 1 year after treatment and to identify potential determinants affecting their QoL. Methods: A systematic search of the literature was conducted in December 2011 in five databases: PubMed, MEDLINE, Scopus, ScienceDirect and CINAHL, using the combined search terms ‘head and neck cancer’, ‘quality of life’, ‘health-related quality of life’ and ‘systematic review’. The methodological quality of the selected studies was assessed by two reviewers using predefined criteria. The study characteristics and results were abstracted and summarized. Results: Thirty-seven studies met all inclusion criteria, with methodological quality ranging from moderate to high. The global QoL of H&N cancer survivors returned to baseline at 1 year after treatment. Significant improvement was shown in emotional functioning, while physical functioning, xerostomia, sticky/insufficient saliva, and fatigue were consistently worse at 12 months compared with baseline. Age, cancer site and stage, social support, smoking, and the presence of a feeding tube were significant QoL determinants at 12 months. Conclusions: Although the global QoL of H&N cancer survivors recovers by 12 months after treatment, problems with physical functioning, fatigue, xerostomia and sticky saliva persist. Regular assessment should be carried out to monitor these problems. Further research is required to develop appropriate and effective interventions for this population.
Abstract:
A finely tuned innate immune response plays a pivotal role in protecting the host against bacterial invasion during periodontal disease progression. Hyperlipidemia has been suggested to worsen periodontal health; however, the underlying mechanism has not been addressed. In the present study, we investigated the effect of hyperlipidemia on innate immune responses to infection with the periodontal pathogen Porphyromonas gingivalis. Apolipoprotein E-deficient (ApoE−/−) and wild-type mice at the age of 20 weeks were used for the study. Peritoneal macrophages were isolated and subsequently used for the study of viable P. gingivalis infection. ApoE−/− mice demonstrated inhibited iNOS production and impaired clearance of P. gingivalis in vitro and in vivo; furthermore, ApoE−/− mice displayed a disrupted cytokine production pattern in response to P. gingivalis, with decreased production of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), IL-1β and monocyte chemotactic protein-1. Microarray data demonstrated that the Toll-like receptor (TLR) and NOD-like receptor (NLR) pathways were altered in ApoE−/− mouse macrophages; further analysis of pattern recognition receptors (PRRs) demonstrated that expression of triggering receptor expressed on myeloid cells-1 (TREM-1), an amplifier of the TLR and NLR pathways, was decreased in ApoE−/− mouse macrophages, leading to decreased recruitment of NF-κB onto the promoters of TNF-α and IL-6. Our data suggest that in ApoE−/− mice hyperlipidemia disrupts the expression of PRRs and cripples the host’s capacity to generate a sufficient innate immune response to P. gingivalis, which may facilitate immune evasion, subgingival colonization and establishment of P. gingivalis in the periodontal niche.
Abstract:
Young male drivers are over-represented in road-related fatalities. Speeding represents a pervasive and significant contributor to road trauma. Anti-speeding messages represent a long-standing strategy aimed at discouraging drivers from speeding. These messages, however, have not always achieved their persuasive objectives, which may be due, in part, to their not always targeting the most salient beliefs underpinning the speeding behavior of particular driver groups. The current study elicited key beliefs underpinning speeding behavior, as well as strategies used to avoid speeding, using a well-validated belief-based model, the Theory of Planned Behavior, and in-depth qualitative methods. To obtain the most comprehensive understanding of the salient beliefs and strategies of young male drivers, how such beliefs and strategies compared with those of drivers of varying ages and genders was also explored. Overall, 75 males and females (aged 17-25 or 30-55 years) participated in group discussions. The findings revealed beliefs that were particularly relevant to young males and that would likely represent key foci for developing message content. For instance, the need to feel in control and the desire to experience positive affect when driving were salient advantages, while infringements were a salient disadvantage, in particular the loss of points and the implications associated with potential licence loss as opposed to the monetary (fine) loss (behavioral beliefs). For normative influences, young males appeared to hold notable misperceptions (compared with other drivers, such as young females); for instance, young males believed that females/girlfriends were impressed by their speeding. In the case of control beliefs, the findings revealed low perceptions of control with respect to being able to not speed, and a belief that something “extraordinary” would need to happen for a young male driver to lose control of their vehicle while speeding. The practical implications of the findings, in terms of providing suggestions for devising the content of anti-speeding messages, are discussed.
Abstract:
The use of intelligent transport systems is proliferating across the Australian road network, particularly on major freeways. New technology allows a greater range of signs and messages to be displayed to drivers. While there has been a long history of human factors analyses of signage, no evaluation has been conducted of this novel, sometimes dynamic, signage or of potential interactions when signs are co-located. The purpose of this driving simulator study was to investigate drivers’ behavioural changes and comprehension resulting from the co-location of Lane Use Management Systems with static signs and (Enhanced) Variable Message Signs on Queensland motorways. A section of motorway was simulated, and nine scenarios were developed which presented a combination of signage cases across levels of driving task complexity. Two higher-risk road user groups were targeted for this research on an advanced driving simulator: older (65+ years, N=21) and younger (18-22 years, N=20) drivers. Changes in sign co-location and task complexity had a small effect on driver comprehension of the signs and on vehicle dynamics variables, including the difference from the posted speed limit, headway, standard deviation of lane keeping and brake jerks. However, increasing the amount of information provided to drivers at a given location (by co-locating several signs) increased participants’ gaze duration on the signs. With co-location of signs and without added task complexity, a single gaze exceeded 2 seconds for more than half of the population tested in both groups, and reached up to 6 seconds for some individuals.
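As an illustration of how two of the vehicle dynamics variables named above are commonly derived from simulator logs, the sketch below computes the standard deviation of lane position (lane keeping) and mean time headway from time-series arrays. It is a minimal, hypothetical example: the variable names, sampling rate and values are placeholders, not the simulator output used in the study.

import numpy as np

def lane_keeping_sd(lane_position_m):
    # Standard deviation of lateral lane position (SDLP), a common
    # index of lane-keeping performance; higher values mean more weaving.
    return float(np.std(lane_position_m, ddof=1))

def mean_headway(gap_m, speed_ms):
    # Time headway to the lead vehicle: distance gap divided by own speed.
    # Samples where the vehicle is (almost) stationary are excluded.
    gap_m = np.asarray(gap_m, dtype=float)
    speed_ms = np.asarray(speed_ms, dtype=float)
    moving = speed_ms > 1.0
    return float(np.mean(gap_m[moving] / speed_ms[moving]))

# Hypothetical 60 s of logs sampled at 10 Hz
rng = np.random.default_rng(0)
lane_pos = rng.normal(0.0, 0.25, 600)   # metres from lane centre
gap = rng.normal(30.0, 5.0, 600)        # metres to lead vehicle
speed = rng.normal(25.0, 1.0, 600)      # metres per second
print(lane_keeping_sd(lane_pos), mean_headway(gap, speed))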
Abstract:
Background: Side effects of the medications used for procedural sedation and analgesia in the cardiac catheterisation laboratory are known to cause impaired respiratory function. Impaired respiratory function poses considerable risk to patient safety as it can lead to inadequate oxygenation. Having knowledge of the conditions that predict impaired respiratory function prior to the procedure would enable nurses to identify at-risk patients and selectively implement intensive respiratory monitoring. This would reduce the possibility of inadequate oxygenation occurring. Aim: To identify pre-procedure risk factors for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory. Design: Retrospective matched case–control. Methods: 21 cases of impaired respiratory function were identified and matched to 113 controls from a consecutive cohort of patients over 18 years of age. Conditional logistic regression was used to identify risk factors for impaired respiratory function. Results: With each additional indicator of acute illness, case patients were nearly twice as likely as their controls to experience impaired respiratory function (OR 1.78; 95% CI 1.19–2.67; p = 0.005). Indicators of acute illness included emergency admission, being transferred from a critical care unit for the procedure, or requiring respiratory or haemodynamic support in the lead-up to the procedure. Conclusion: Several factors that predict the likelihood of impaired respiratory function were identified. The results from this study could be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory.
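To make concrete how a matched case-control odds ratio like the one reported above is obtained, the sketch below fits a conditional logistic regression with cases and controls grouped by matched set and exponentiates the coefficient to recover the OR and its 95% CI. It is a minimal illustration on fabricated placeholder data, assuming statsmodels' ConditionalLogit is available; it is not the study's analysis code, and the column names are hypothetical.

import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Placeholder matched data: one case and two controls per matched set.
# 'acute_illness' counts indicators such as emergency admission or
# pre-procedural respiratory/haemodynamic support (hypothetical values).
df = pd.DataFrame({
    "matched_set":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "case":          [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "acute_illness": [2, 0, 1, 3, 1, 0, 1, 1, 0, 2, 0, 2],
})

model = ConditionalLogit(df["case"], df[["acute_illness"]], groups=df["matched_set"])
result = model.fit()

# The odds ratio per additional indicator is exp(coefficient).
odds_ratio = np.exp(result.params["acute_illness"])
ci_low, ci_high = np.exp(result.conf_int().loc["acute_illness"])
print(f"OR per additional indicator: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")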
Abstract:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic-stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied and that there was contention around some aspects of practice, including the types of medications suitable for use and the depth of sedation that could be safely induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases. The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe. However, it was concluded from the analysis of the studies and guidelines included in the review that the management of sedation in the CCL was impacted by a variety of contextual factors, including local hospital policy, workforce constraints and cardiologists’ preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor the level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. However, only one scale was found that was developed specifically for the CCL, and it had not undergone psychometric testing. Several weaknesses were identified in its item structure. Other sedation scales that were identified were developed for the intensive care unit (ICU). Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As the findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were made. The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses’ perceptions of the issues and challenges associated with nurse-administered PSA in the CCL.
Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring and the impact that the increasing complexity of procedures has on patients' sedation requirements. The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs. This was the first study to quantify the frequency with which nurse-administered PSA was used in the CCL setting and to characterise associated nursing practices. It was found that nearly all CCLs utilise nurse-administered PSA (94%). Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses, were identified. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR 1.78; 95% CI 1.19–2.67; p = 0.005). These significant findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL. The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases of the program of research informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. The clinical practice guidelines that were developed set current best practice standards for pre-procedural patient assessment and risk screening practices as well as the intra- and post-procedural patient monitoring practices that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting. In summary, the mixed methods approach that was used clearly enabled the research objectives to be comprehensively addressed in an informed sequential manner, and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation of the research to note is that the comprehensive appraisal of the evidence conducted, combined with the guideline development process, highlighted that there were numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required in order to ascertain which specific practices result in optimal patient and health service outcomes. Therefore, along with necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, which will form part of a continuing program of research in this field.
Abstract:
Impaired respiratory function (IRF) during procedural sedation and analgesia (PSA) poses considerable risk to patient safety as it can lead to inadequate oxygenation and ventilation. Risk factors that can be screened prior to the procedure have not been identified for the cardiac catheterization laboratory (CCL).
Abstract:
In Social Science (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest for addressing the question “what is a good practitioner?” has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence, to ask “what is a good ‘captain’?” is not a new question, we should add (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of Theory and Practice, rigor and relevance of research, ways of knowing and knowledge forms. On the one hand we face the “Enlightenment” assumptions underlying modern positivist Social science, grounded in the “unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level” and cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal, and its “tendency to make all kinds of knowing equivalent” (Eikeland, 2012, p. 20). In the project management space, such questioning aims at addressing one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting projects research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, development of standards, best & good practices and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319-323; Cicmil & Hodgson, 2006, p. 2–6; Hodgson & Cicmil, 2007, p. 436–7; Winter et al., 2006, p. 638). Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a “social-scientific Newton” to revolutionize this young field “are not only waiting for a train that will not arrive, but are in the wrong station altogether” (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream mainly rooted in the “practice turn” (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity and the advocated pluralism lead to reinforcing the very “functional stupidity” (Alvesson & Spicer, 2012, p. 1194) this postmodern stream aims at overcoming. We suggest here that addressing the question “what is a good PM?” requires a philosophy of practice perspective to complement the “usual” philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within Social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist ‘box’, Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: “It can cling to the discredited research program of the purely theoretical (i.e.
“modern”) philosophy, which will end up by driving it out of business: it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical (“post-modern”) agenda; or it can return to its pre-17th century traditions, and try to recover the lost (“pre-modern”) topics that were side-tracked by Descartes, but can be usefully taken up for the future” (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for so-called post-modernism, many authors build on “pre-modern” philosophies such as the Aristotelian one (e.g. MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). It is perhaps because the post-modern stream emphasizes a dialogic process restricted to reliance on voice and textual representation that it limits the meaning of communicative praxis and weakens practice, turning attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332–4; Schrag, 1986, p. 30, 46–7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian “gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking” (Eikeland, 2012, p. 20–21). By contrast with modernist one-dimensional thinking and relativist, pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, re-conceptualising (“re” involving here a “re”-turn to pre-modern thinking) the “do” and shifting the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: “In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, ‘what is a good captain?’ is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do.” (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: What does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and “practice”.
Abstract:
Background: Medication remains the cornerstone treatment for mental illness. Cognition is one of the strongest predictors of non-adherence. The aim of this preliminary investigation was to examine the association between the Large Allen Cognitive Level Screen (LACLS) and medication adherence among a small sample of mental health service users, to determine whether the LACLS has potential as a screening tool for capacity to manage medication regimens. Method: Demographic and clinical information was collected from a small sample of people who had recently accessed community mental health services. Participants then completed the LACLS and the Medication Adherence Rating Scale (MARS) at a single time point. The strength of association between the LACLS and MARS was examined using Spearman rank-order correlation. Results: A strong positive correlation between the LACLS and medication adherence (r = 0.71, p = 0.01) was evident. No participants reported the use of medication aids despite evidence of impaired cognitive functioning. Conclusion: This investigation has provided the first empirical evidence indicating that the LACLS may have utility as a screening instrument for capacity to manage medication regimens among this population. While promising, this finding should be interpreted with caution given its preliminary nature.
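As a small illustration of the association measure reported above, the snippet below computes a Spearman rank-order correlation and its p-value for two paired score vectors using scipy; the variable names and values are placeholders, not the study data.

from scipy.stats import spearmanr

# Hypothetical paired scores: a cognitive screen (LACLS-style) and a
# self-reported adherence scale (MARS-style) for ten participants.
cognition = [4.0, 4.4, 4.6, 5.0, 5.2, 5.4, 5.6, 5.8, 5.8, 6.0]
adherence = [3, 4, 5, 6, 6, 7, 7, 8, 9, 10]

# spearmanr ranks both vectors and correlates the ranks, so it captures
# monotonic (not necessarily linear) association.
rho, p_value = spearmanr(cognition, adherence)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")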
In the blink of an eye: the circadian effects on ocular and subjective indices of driver sleepiness
Abstract:
Driver sleepiness contributes substantially to fatal and severe crashes, and the contribution it makes to less serious crashes is likely to be as great or greater. Currently, drivers’ awareness of sleepiness (subjective sleepiness) remains a critical component for the mitigation of sleep-related crashes. Nonetheless, numerous calls have been made for technological monitors of drivers’ physiological sleepiness levels so that drivers can be ‘alerted’ when approaching high levels of sleepiness. Several physiological indices of sleepiness show potential as reliable metrics for monitoring drivers’ sleepiness levels, with eye blink indices being a promising candidate. However, extensive evaluations of eye blink measures are lacking, including the effects that the endogenous circadian rhythm can have on eye blinks. To examine the utility of ocular measures, 26 participants completed a simulated driving task after partial sleep restriction while physiological measures of blink rate and duration were recorded. To examine circadian effects, participants were randomly assigned to complete either a morning or an afternoon session of the driving task. The results show that subjective sleepiness levels increased over the duration of the task. The blink duration index was sensitive to increases in sleepiness during morning testing, but was not sensitive during afternoon testing. This finding suggests that blink indices are still far from being a specific, reliable metric for sleepiness. The subjective measures had the largest effect size when compared with the blink measures. Therefore, awareness of sleepiness remains a critical factor for driver sleepiness and the mitigation of sleep-related crashes.
Abstract:
Background: Studies that compare Indigenous Australian and non-Indigenous patients who experience a cardiac event or chest pain are inconclusive about the reasons for the differences in in-hospital and survival rates. Advances in diagnostic accuracy, medication and a specialised workforce have contributed to lower case fatality and longer survival rates; however, this is not evident in the Indigenous Australian population. A possible driver contributing to this disparity may be the impact of the patient-clinician interface during key interactions in the health care process. Methods/Design: This study will apply an Indigenous framework to describe the interaction between Indigenous patients and clinicians during the continuum of cardiac health care, i.e. from acute admission through secondary and rehabilitative care. Adopting an Indigenous framework is more aligned with Indigenous realities, knowledge, intellects, histories and experiences. A triple-layered focus group design will be employed to discuss patient-clinician engagement. Focus groups will be arranged by geographic cluster, i.e. a metropolitan and a regional centre. Patient informants will be identified by Indigenous status (i.e. Indigenous and non-Indigenous) and the focus groups will be convened separately. The health care provider focus groups will be convened on an organisational basis, i.e. state health providers and Aboriginal Community Controlled Health Services. Yarning will be used as a research method to facilitate discussion. Yarning is in congruence with the oral traditions that are still a reality in day-to-day Indigenous lives. Discussion: This study is nested in a larger research program that explores the drivers of the disparity in care and health outcomes for Indigenous and non-Indigenous Australians who experience an acute cardiac admission. A focus on health status, risk factors and clinical interventions may camouflage critical issues within the patient-clinician exchange. This approach may provide a way forward to reduce the appalling health disadvantage experienced within Indigenous Australian communities. Keywords: Patient-clinician engagement, Qualitative, Cardiovascular disease, Focus groups, Indigenous
Abstract:
Purpose: Young novice drivers are at considerable risk of injury on the road, and their behaviour appears vulnerable to the social influence of their friends. Research was undertaken to identify the nature and mechanisms of peer influence upon novice driver (16-25 years) behaviour to inform the design of more effective young driver countermeasures. Methods: Peer influence was explored in small group interviews (n = 21) and three surveys (n1 = 761, n2 = 1170, n3 = 390) as part of a larger Queensland-wide study. Surveys two and three were part of a six-month longitudinal study. Results: Peer influence was reported from the pre-Licence to the Provisional (intermediate) periods. Young novice drivers who experienced or expected social punishments, including ‘being told off’ for risky driving, reported less riskiness. Conversely, young novice drivers who experienced or expected social rewards, such as being ‘cheered on’ by their friends – who were also more risky drivers – reported more risky driving, including crashes and offences. Conclusions: Peers appear influential in the risky behaviour of young novice drivers, and influence occurs through social mechanisms of reinforcement and sanction. Interventions enhancing positive influence and curtailing negative influence may improve road safety outcomes not only for young novice drivers, but for all persons who share the road with them. Among the interventions warranting further development and evaluation are programs to encourage the modelling of safe driving behaviour and attitudes by young drivers, and minimisation of social reinforcement and promotion of social sanctions for risky driving behaviour in particular.