113 results for Discomfort and comfort perception
Abstract:
The Queensland Government has implemented strategies promoting a shift from individual car use to active transport, a transition which requires drivers to adapt to sharing the road with increased numbers of people cycling through the transport network. For this to occur safely, changes in both road infrastructure and road user expectations and behaviours will be needed. Creating separate cycle infrastructure does not remove the need for cyclists to commence, cross or finish travel on shared roads. Currently, intersections are one of the predominant shared road spaces where crashes result in cyclists being injured or killed. This research investigates how Brisbane cyclists and drivers perceive risk when interacting with other road users at intersections. The current study replicates a French study conducted by co-authors Chaurand and Delhomme in 2011 and extends it to assess gender effects which have been reported in other Australian cycling research. An online survey was administered to experienced cyclists and drivers. Participants rated the level of risk they felt when imagining a number of different road situations. Based on the earlier French study, it is expected that perceived crash risk will be influenced both by the participant’s mode of travel and by the type of interacting vehicle, and that perceived risk will be greater when the interaction is with a car than with a bicycle. It is predicted that risk perception will decrease as the level of experience increases, and that male participants will have a higher perception of skill and a lower perception of risk than females. The findings of this Queensland study will provide valuable insight into perceived risk and the traffic behaviours of drivers and cyclists when interacting with other road users, and results will be available for presentation at the Congress.
Abstract:
Background: Antibiotic overuse is a global public health issue influenced by several factors, some of which are parent-related psychosocial factors that can only be measured using valid and reliable psychosocial measurement instruments. The PAPA scale was developed to measure these factors, and the content validity of this instrument has been assessed. Aim: This study further validated the recently developed instrument in terms of (1) face validity and (2) construct validity, including deciding the number and nature of factors, and item selection. Methods: Questionnaires were self-administered to parents of children aged 0 to 12 years. Parents were recruited by convenience sampling at school parent meetings in the Eastern Province, Saudi Arabia. Face validity was assessed with regard to questionnaire clarity and unambiguity. Construct validity and item selection were assessed using exploratory factor analysis. Results: Parallel analysis and exploratory factor analysis using principal axis factoring produced six factors in the developed instrument: knowledge and beliefs, behaviours, sources of information, adherence, awareness of antibiotic resistance, and parents’ perception of doctors’ prescribing behaviours. Internal consistency was acceptable (Cronbach’s alpha = 0.78), indicating that the instrument is reliable. Conclusion: The factors produced in this study coincide with the constructs contextually identified in the development phase of other instruments used to study antibiotic use. However, no other study considering perceptions of antibiotic use has gone beyond content validation of such instruments. This study is the first to assess the construct validity of the factors underlying perceptions of antibiotic use in any population, and in parents in particular.
Abstract:
Visual abnormalities, both at the sensory input and the higher interpretive levels, have been associated with many of the symptoms of schizophrenia. Individuals with schizophrenia typically experience distortions of sensory perception, resulting in perceptual hallucinations and delusions that are related to the observed visual deficits. Disorganised speech, thinking and behaviour are commonly experienced by sufferers of the disorder, and have also been attributed to perceptual disturbances associated with anomalies in visual processing. Compounding these issues are marked deficits in cognitive functioning, observed in approximately 80% of those with schizophrenia. Cognitive impairments associated with schizophrenia include difficulty with concentration and memory (working, visual and verbal), an impaired ability to process complex information, impaired response inhibition, and deficits in processing speed and in visual and verbal learning. Deficits in sustained attention or vigilance, poor executive functioning (such as reasoning and problem solving) and impaired social cognition are all influenced by impaired visual processing. These symptoms impact on the internal perceptual world of those with schizophrenia and hamper their ability to navigate their external environment. Visual processing abnormalities in schizophrenia are likely to worsen personal, social and occupational functioning. Binocular rivalry provides a unique opportunity to investigate the processes involved in visual awareness and visual perception. Binocular rivalry is the alternation of perceptual images that occurs when conflicting visual stimuli are presented to each eye in the same retinal location. The observer perceives the opposing images in an alternating fashion, despite the sensory input to each eye remaining constant. Binocular rivalry tasks have been developed to investigate specific parts of the visual system.
The research presented in this Thesis provides an explorative investigation into binocular rivalry in schizophrenia, using the method of Pettigrew and Miller (1998) and comparing individuals with schizophrenia to healthy controls. This method allows manipulations to the spatial and temporal frequency, luminance contrast and chromaticity of the visual stimuli. Manipulations to the rival stimuli affect the rate of binocular rivalry alternations and the time spent perceiving each image (dominance duration). Binocular rivalry rate and dominance durations provide useful measures to investigate aspects of visual neural processing that lead to the perceptual disturbances and cognitive dysfunction attributed to schizophrenia. However, despite this promise, the binocular rivalry phenomenon has not been extensively explored in schizophrenia to date. Following a review of the literature, the research in this Thesis examined individual variation in binocular rivalry. The initial study (Chapter 2) explored the effect of systematically altering the properties of the stimuli (i.e. spatial and temporal frequency, luminance contrast and chromaticity) on binocular rivalry rate and dominance durations in healthy individuals (n = 20). The findings showed that altering the stimuli with respect to temporal frequency and luminance contrast significantly affected rate. This is significant, as processing of temporal frequency and luminance contrast has consistently been demonstrated to be abnormal in schizophrenia. The current research then explored binocular rivalry in schizophrenia. The primary research question was, "Are binocular rivalry rates and dominance durations recorded in participants with schizophrenia different to those of the controls?"
In this second study, binocular rivalry data collected using low- and high-strength binocular rivalry stimuli were compared to alternations recorded during a monocular rivalry task, the Necker cube task, to replicate and advance the work of Miller et al. (2003). Participants with schizophrenia (n = 20) recorded fewer alternations (i.e. slower alternation rates) than control participants (n = 20) on both binocular rivalry tasks; however, no difference was observed between the groups on the Necker cube task. The magnocellular and parvocellular visual pathways, thought to be abnormal in schizophrenia, were also investigated in binocular rivalry. The binocular rivalry stimuli used in the third study (Chapter 4) were altered to bias the task towards one of these two pathways. Participants with schizophrenia recorded slower binocular rivalry rates than controls in both binocular rivalry tasks. Using a within-subject design, binocular rivalry data were compared to data collected from a backward-masking task widely accepted to bias both these pathways. Based on these data, a model of binocular rivalry, based on the magnocellular and parvocellular pathways that contribute to the dorsal and ventral visual streams, was developed. Binocular rivalry rates were also compared with performance on Benton’s Judgment of Line Orientation task in individuals with schizophrenia and healthy controls (Chapter 5). Benton’s Judgment of Line Orientation task is widely accepted to be processed within the right cerebral hemisphere, making it an appropriate task with which to investigate the role of the cerebral hemispheres in binocular rivalry, and to test the inter-hemispheric switching hypothesis of binocular rivalry proposed by Pettigrew and Miller (1998, 2003). The data were suggestive of intra-hemispheric, rather than inter-hemispheric, visual processing in binocular rivalry.
Neurotransmitter involvement in binocular rivalry, backward masking and Judgment of Line Orientation in schizophrenia was investigated using a genetic indicator of dopamine receptor distribution and functioning: the presence of the Taq1 allele of the dopamine D2 receptor (DRD2) gene. This final study (Chapter 6) explored whether the presence of the Taq1 allele, and thus, by inference, the distribution of dopamine receptors and dopamine function, accounted for the large individual variation in binocular rivalry. The presence of the Taq1 allele was associated with the slower binocular rivalry rates and the poorer performance on the backward-masking and Judgment of Line Orientation tasks seen in the group with schizophrenia. This Thesis has contributed to what is known about binocular rivalry in schizophrenia. Consistently slower binocular rivalry rates were observed in participants with schizophrenia, indicating abnormally slow visual processing in this group. These data support previous studies reporting visual processing abnormalities in schizophrenia and suggest that a slow binocular rivalry rate is not a feature specific to bipolar disorder, but may be a feature of disorders with psychotic features generally. The contributions of the magnocellular or dorsal pathways and the parvocellular or ventral pathways to binocular rivalry, and therefore to perceptual awareness, were investigated. The data presented supported the view that the magnocellular system initiates perceptual awareness of an image and the parvocellular system maintains the perception of the image, making it available to higher-level processing occurring within the cortical hemispheres. Abnormal magnocellular and parvocellular processing may both contribute to perceptual disturbances that ultimately contribute to the cognitive dysfunction associated with schizophrenia. An alternative model of binocular rivalry based on these observations was proposed.
Abstract:
Diet-induced thermogenesis (DIT) is the energy expended consequent to meal consumption, reflecting the energy required for the processing and digestion of food consumed throughout each day. Although DIT is the total energy expended across a day in digestive responses to a number of meals, most studies measure thermogenesis in response to a single meal (meal-induced thermogenesis: MIT) as a representation of an individual’s thermogenic response to acute food ingestion. As a component of energy expenditure, DIT may have a contributing role in weight gain and weight loss. While the evidence is inconsistent, research has tended to reveal a suppressed MIT response in obese compared to lean individuals, suggesting that such individuals store food energy more efficiently and hence have a greater tendency for weight gain. Appetite is another factor regulating body weight through its influence on energy intake. Preliminary research has shown a potential link between MIT and postprandial appetite, as both are responses to food ingestion and both depend on the macronutrient content of the food. There is growing interest in understanding how both MIT and appetite are modified with changes in diet, activity levels and body size. However, the findings from MIT research have been highly inconsistent, potentially due to the vastly divergent protocols used for its measurement. Therefore, the main aim of this thesis was, firstly, to address some of the methodological issues associated with measuring MIT. Additionally, this thesis aimed to measure postprandial appetite simultaneously with MIT, to test for any relationships between these meal-induced variables, and to assess changes that occur in MIT and postprandial appetite during periods of energy restriction (ER) and following weight loss. Two separate studies were conducted to achieve these aims.
Given the increasing prevalence of obesity, it is important to develop accurate methodologies for measuring the components potentially contributing to its development and to understand the variability within these variables. Therefore, the aim of Study One was to establish a protocol for measuring the thermogenic response to a single test meal (MIT) as a representation of DIT across a day. This was done by determining the reproducibility of MIT with a continuous measurement protocol and determining the effect of measurement duration. The benefit of a fixed resting metabolic rate (RMR; a single measure of RMR used to calculate each subsequent measure of MIT), compared to separate baseline RMRs (an RMR measured immediately prior to each MIT test meal), was also assessed to determine which method gave greater reproducibility. Subsidiary aims were to measure postprandial appetite simultaneously with MIT, to determine its reproducibility between days, and to assess potential relationships between these two variables. Ten healthy individuals (5 males, 5 females; age = 30.2 ± 7.6 years; BMI = 22.3 ± 1.9 kg/m2; fat mass = 27.6 ± 5.9%) undertook three testing sessions within a 1-4 week period. During the first visit, participants had their body composition measured using DXA for descriptive purposes, then had an initial 30-minute measure of RMR to familiarise them with the testing and to be used as a fixed baseline for calculating MIT. During the second and third testing sessions, MIT was measured. Measures of RMR and MIT were undertaken using a metabolic cart with a ventilated hood to measure energy expenditure via indirect calorimetry, with participants in a semi-reclined position.
The procedure on each MIT test day was: 1) a baseline RMR measured for 30 minutes; 2) a 15-minute break in the measure to consume a standard 576 kcal breakfast (54.3% CHO, 14.3% PRO, 31.4% FAT) comprising muesli, milk, toast, butter, jam and juice; and 3) six hours of measuring MIT, with two ten-minute breaks at 3 and 4.5 hours for participants to visit the bathroom. On the MIT test days, pre- and post-breakfast and then at 45-minute intervals, participants rated their subjective appetite, alertness and comfort on visual analogue scales (VAS). Prior to each test, participants were required to have fasted for 12 hours and to have undertaken no high-intensity physical activity for the previous 48 hours. Despite no significant group changes in the MIT response between days, individual variability was high, with an average between-day CV of 33%, which was not significantly improved (to 31%) by the use of a fixed RMR. The 95% limits of agreement, which ranged from 9.9% of energy intake (%EI) to -10.7%EI with the baseline RMRs and from 9.6%EI to -12.4%EI with the fixed RMR, indicated very large changes relative to the size of the average MIT response (MIT 1: 8.4%EI, 13.3%EI; MIT 2: 8.8%EI, 14.7%EI; baseline and fixed RMRs respectively). After just three hours, the between-day CV with the baseline RMR was 26%, which may indicate enhanced MIT reproducibility with shorter measurement durations. On average, 76, 89 and 96% of the six-hour MIT response was completed within three, four and five hours, respectively. Strong correlations were found between MIT at each of these time points and the total six-hour MIT (r = 0.990 to 0.998; P < 0.01). The proportion of the six-hour MIT completed at 3, 4 and 5 hours was reproducible between days (CVs ≤ 8.5%), indicating that shorter durations can be used on repeated occasions, with a similar percentage of the total response captured.
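The reproducibility statistics reported above (between-day CV and Bland-Altman 95% limits of agreement) can be sketched as follows. This is a minimal illustration only: the MIT values, the five-participant sample and the function names are hypothetical, not data or code from the study.

```python
import statistics

def between_day_cv(day1, day2):
    """Mean within-subject coefficient of variation (%) across two visits."""
    cvs = []
    for a, b in zip(day1, day2):
        pair_mean = (a + b) / 2
        pair_sd = statistics.stdev([a, b])   # SD of the two visits
        cvs.append(pair_sd / pair_mean * 100)
    return statistics.mean(cvs)

def limits_of_agreement(day1, day2):
    """Bland-Altman 95% limits of agreement for day-to-day differences."""
    diffs = [a - b for a, b in zip(day1, day2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical MIT values (% of energy intake) for five participants on two days
mit_day1 = [8.1, 9.0, 7.5, 8.8, 8.3]
mit_day2 = [8.9, 7.8, 8.2, 9.4, 7.9]

print(round(between_day_cv(mit_day1, mit_day2), 1))
lower, upper = limits_of_agreement(mit_day1, mit_day2)
print(round(lower, 2), round(upper, 2))
```

Each pair's CV uses the within-subject SD of the two visits, and the limits of agreement follow the usual mean ± 1.96 SD of the day-to-day differences.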
There was a lack of strong evidence of any relationship between the magnitude of the MIT response and subjective postprandial appetite. Given that a six-hour protocol places a considerable burden on participants, these results suggest that a post-meal measurement period of only three hours is sufficient to produce valid information on the metabolic response to a meal. However, while there was no mean change in MIT between test days, individual variability was large. Further research is required to better understand which factors best explain the between-day variability in this physiological measure. With such a high prevalence of obesity, dieting has become a common necessity for reducing body weight. However, during periods of ER, metabolic and appetite adaptations can occur which may impede weight loss. Understanding how metabolic and appetite factors change during ER and weight loss is important for designing optimal weight loss protocols. The purpose of Study Two was to measure the changes in the MIT response and subjective postprandial appetite during either continuous (CONT) or intermittent (INT) ER and following post-diet energy balance (post-diet EB). Thirty-six obese male participants were randomly assigned to either the CONT group (age = 38.6 ± 7.0 years; weight = 109.8 ± 9.2 kg; fat mass = 38.2 ± 5.2%) or the INT group (age = 39.1 ± 9.1 years; weight = 107.1 ± 12.5 kg; fat mass = 39.6 ± 6.8%). The study was divided into three phases: a four-week baseline (BL) phase, during which participants were provided with a diet to maintain body weight; an ER phase lasting either 16 (CONT) or 30 (INT) weeks, during which participants were provided with a diet supplying 67% of their energy balance requirements to induce weight loss; and an eight-week post-diet EB phase, providing a diet to maintain body weight after weight loss. The INT ER phase was delivered as eight two-week blocks of ER interspersed with two-week blocks designed to achieve weight maintenance.
Energy requirements for each phase were predicted from measured RMR and adjusted throughout the study to account for changes in RMR. All participants completed MIT and appetite tests during the BL and ER phases. Nine CONT and 15 INT participants completed the post-diet EB MIT tests, and 15 CONT and 14 INT participants completed the post-diet EB appetite tests. The MIT test day protocol was as follows: 1) a baseline RMR measured for 30 minutes; 2) a 15-minute break in the measure to consume a standard breakfast meal (874 kcal; 53.3% CHO, 14.5% PRO, 32.2% FAT); and 3) three hours of measuring MIT. MIT was calculated as the energy expenditure above the pre-meal RMR. Appetite tests were undertaken on a separate day using the same 576 kcal breakfast used in Study One. VAS were used to assess appetite pre- and post-breakfast, at one hour post-breakfast, and then a further three times at 45-minute intervals. Appetite ratings were calculated for hunger and fullness as both the intra-meal change in appetite and the area under the curve (AUC). The three-hour MIT responses at BL, ER and post-diet EB were, respectively, 5.4 ± 1.4%EI, 5.1 ± 1.3%EI and 5.0 ± 0.8%EI for the CONT group, and 4.4 ± 1.0%EI, 4.7 ± 1.0%EI and 4.8 ± 0.8%EI for the INT group. Compared to BL, neither group had significant changes in their MIT response during ER or post-diet EB. There were no significant time-by-group interactions (p = 0.17), indicating a similar response to ER and post-diet EB in both groups. Contrary to what was hypothesised, there was a significant increase in postprandial AUC fullness in response to ER in both groups (p < 0.05). However, there were no significant changes in any of the other postprandial hunger or fullness variables. Despite no changes in MIT in either the CONT or the INT group in response to ER or post-diet EB, and only a minor increase in postprandial AUC fullness, the individual changes in MIT and postprandial appetite in response to ER were large.
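The two derived measures described above (MIT as energy expenditure above the pre-meal RMR, expressed as %EI, and appetite AUC from VAS ratings) can be sketched as follows. All numbers and function names here are hypothetical illustrations, not study data.

```python
def mit_percent_ei(postprandial_ee_kcal, rmr_rate_kcal_min,
                   duration_min, energy_intake_kcal):
    """MIT as the energy expended above the pre-meal RMR over the measurement
    period, expressed as a percentage of the meal's energy content (%EI)."""
    resting_component = rmr_rate_kcal_min * duration_min
    return (postprandial_ee_kcal - resting_component) / energy_intake_kcal * 100

def vas_auc(times_min, ratings_mm):
    """Area under the curve of VAS appetite ratings (trapezoidal rule)."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        auc += dt * (ratings_mm[i] + ratings_mm[i - 1]) / 2
    return auc

# Hypothetical three-hour measurement after an 874 kcal breakfast
print(round(mit_percent_ei(postprandial_ee_kcal=260.0, rmr_rate_kcal_min=1.2,
                           duration_min=180, energy_intake_kcal=874.0), 1))

# Hypothetical fullness ratings (mm) pre-meal, post-meal, then at intervals
print(vas_auc([0, 15, 75, 120, 165, 210], [20, 70, 60, 50, 42, 35]))
```

The hypothetical inputs were chosen to land in the same %EI range as the group means reported above, but they are not measurements from the study.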
However, those with the greatest MIT changes did not have the greatest changes in postprandial appetite. This study shows that postprandial appetite and MIT are unlikely to be altered during ER and are therefore unlikely to hinder weight loss. Additionally, there were no changes in MIT in response to weight loss, indicating that body weight did not influence the magnitude of the MIT response. There were large individual changes in both variables; however, further research is required to determine whether these changes were real compensatory responses to ER or simply between-day variation. Overall, the results of this thesis add to the current literature by showing the large variability of continuous MIT measurements, which makes it difficult to compare MIT between groups and in response to diet interventions. This thesis provides evidence that shorter measures may give equally valid information about the total MIT response and can therefore be utilised in future research to reduce the burden of long measurement durations. This thesis also indicates that MIT and postprandial subjective appetite are most likely independent of each other, and shows that, on average, energy restriction was not associated with compensatory changes in MIT and postprandial appetite that would have impeded weight loss. However, the large inter-individual variability supports the need to examine individual responses in more detail.
Abstract:
Bicycle commuting has the potential to be an effective contributing solution to some of modern society’s biggest issues, including cardiovascular disease, anthropogenic climate change and urban traffic congestion. However, individuals shifting from a passive to an active commute mode may increase their potential for air pollution exposure and the associated health risk. This project, consisting of three studies, was designed to investigate the health effects of air pollution exposure on bicycle commuters in a major Australian city (Brisbane). The aims of the three studies were to: 1) examine the relationships between in-commute air pollution exposure perception, symptoms and risk management; 2) assess the efficacy of commute re-routing as a risk management strategy by determining the exposure potential profile of ultrafine particles (UFP; < 0.1 µm) along commute route alternatives of low and high proximity to motorised traffic; and 3) evaluate the feasibility of implementing commute re-routing as a risk management strategy by monitoring UFP exposure and the consequent physiological responses under real-world circumstances, with real-time air pollution and acute inflammatory measurements in healthy individuals using their typical bicycle commute route and an alternative to it. The methods of the three studies included: 1) a questionnaire-based investigation with regular bicycle commuters in Brisbane, Australia.
Participants (n = 153; age = 41 ± 11 yr; 28% female) reported the characteristics of their typical bicycle commute, along with exposure perception and acute respiratory symptoms, and their amenability to using a respirator or re-routing their commute as risk management strategies; 2) inhaled particle counts were measured along popular pre-identified bicycle commute route alternatives of low (LOW) and high (HIGH) motorised traffic to the same inner-city destination at peak commute traffic times. During the commute, real-time particle number concentration (PNC; mostly in the UFP range) and particle diameter (PD), heart and respiratory rate, geographical location, and meteorological variables were measured. To determine inhaled particle counts, ventilation rate was calculated from heart-rate-ventilation associations produced from periodic exercise testing; 3) thirty-five healthy adults (mean ± SD: age = 39 ± 11 yr; 29% female) completed two return trips of their typical route (HIGH) and a pre-determined alternative route of lower proximity to motorised traffic (LOW; determined by the proportion of on-road cycle paths). PNC and PD were monitored in real time in-commute. Acute inflammatory indices of respiratory symptom incidence, lung function and spontaneous sputum (for inflammatory cell analyses) were collected immediately pre-commute, and one and three hours post-commute. The main results of the three studies are that: 1) healthy individuals reported a higher incidence of specific acute respiratory symptoms in- and post- (compared to pre-) commute (p < 0.05). The incidence of specific acute respiratory symptoms was significantly higher for participants with a history of respiratory disorder compared to healthy participants (p < 0.05).
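The inhaled-dose estimation in method 2) combines measured PNC with a ventilation rate predicted from heart rate. A minimal sketch follows, in which the linear heart-rate-ventilation coefficients and all commute numbers are hypothetical assumptions rather than values from the study:

```python
def ventilation_l_min(heart_rate_bpm, slope, intercept):
    """Estimate minute ventilation (L/min) from heart rate via a per-subject
    linear association derived from periodic exercise testing.
    The slope and intercept here are hypothetical."""
    return slope * heart_rate_bpm + intercept

def inhaled_particles(pnc_per_cm3, ventilation, duration_min):
    """Accumulated inhaled particle count: concentration x air volume inhaled.
    Ventilation is in L/min; 1 L = 1000 cm^3."""
    return pnc_per_cm3 * ventilation * 1000.0 * duration_min

# Hypothetical commute: mean PNC 3.0 x 10^4 particles/cm^3, HR 120 bpm, 40 min
vent = ventilation_l_min(120, slope=0.55, intercept=-20.0)  # 46.0 L/min
dose = inhaled_particles(3.0e4, vent, 40)
print(f"{dose:.2e}")
```

In practice the study integrated real-time PNC and ventilation over the commute rather than using single means; this sketch only shows the unit arithmetic.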
The incidence of in-commute offensive odour detection, and the perception of in-commute air pollution exposure, were significantly lower for participants with a smoking history compared to healthy participants (p < 0.05). Females reported a significantly higher incidence of in-commute air pollution exposure perception and other specific acute respiratory symptoms, and were more amenable to commute re-routing, compared to males (p < 0.05). Healthy individuals thus indicated a higher incidence of acute respiratory symptoms in- and post- (compared to pre-) bicycle commuting, with female gender and respiratory disorder history indicating comparably higher susceptibility; 2) total mean PNC of LOW (compared to HIGH) was reduced (1.56 × 10⁴ ± 0.38 × 10⁴ versus 3.06 × 10⁴ ± 0.53 × 10⁴ particles per cubic centimetre (ppcc); p = 0.012). Total estimated ventilation rate did not vary significantly between LOW and HIGH (43 ± 5 versus 46 ± 9 L·min⁻¹; p = 0.136); however, owing to the lower total mean PNC, accumulated inhaled particle counts were 48% lower in LOW compared to HIGH (7.6 × 10⁸ ± 1.5 × 10⁸ versus 14.6 × 10⁸ ± 1.8 × 10⁸; p = 0.003); 3) LOW resulted in a significant reduction in mean PNC (1.91 × 10⁴ ± 0.93 × 10⁴ ppcc vs. 2.95 × 10⁴ ± 1.50 × 10⁴ ppcc; p ≤ 0.001). Commute distance and duration were not significantly different between LOW and HIGH (12.8 ± 7.1 vs. 12.0 ± 6.9 km and 44 ± 17 vs. 42 ± 17 min, respectively). Apart from the incidence of in-commute offensive odour detection (42 vs. 56%; p = 0.019), dust and soot observation (33 vs. 47%; p = 0.038) and nasopharyngeal irritation (31 vs. 41%; p = 0.007), acute inflammatory indices were not significantly associated with in-commute PNC, nor were they reduced with LOW compared to HIGH.
The main conclusions of the three studies are that: 1) awareness of air pollution exposure levels, and amenability to adopting exposure risk management strategies where applicable, will aid the general population in shifting from passive, motorised transport modes to bicycle commuting; 2) for bicycle commuting at peak morning commute times, inhaled particle counts, and therefore cardiopulmonary health risk, may be substantially reduced by decreasing exposure to motorised traffic, which should be considered by both bicycle commuters and urban planners; 3) exposure to PNC, and the incidence of offensive odour and nasopharyngeal irritation, can be significantly reduced by lowering proximity to motorised traffic whilst bicycle commuting, without significantly increasing commute distance or duration, which may bring important benefits for both healthy and susceptible individuals. In summary, the findings from this project suggest that bicycle commuters can significantly lower their exposure to ultrafine particle emissions by varying their commute route to reduce proximity to motorised traffic and its associated combustion emissions, without necessarily affecting their commute time. While the health endpoints assessed in healthy individuals were not indicative of acute health detriment, individuals with pre-disposing physiological susceptibility may benefit considerably from this risk management strategy – a necessary research focus given the growing promotion of, and participation in, bicycle commuting.
Abstract:
Background: Job dissatisfaction, stress and burnout are linked to high rates of nurses leaving the profession, poor morale, poor patient outcomes and increased financial expenditure. Haemodialysis nurses find their work satisfying, although it can be stressful. Little is known, however, about the job satisfaction, stress or burnout levels of haemodialysis nurses in Australia and New Zealand. Aims: To assess current levels of job satisfaction, stress and burnout, and nurses’ perception of the haemodialysis work environment. Methods: An observational study involving a cross-sectional sample of 417 registered or enrolled nurses working in Australian or New Zealand haemodialysis units. Data were collected using an online questionnaire containing demographic and work characteristics as well as validated measures of job satisfaction, stress, burnout and the work environment. Results: 74% of respondents were aged over 40 and 75% had more than six years of haemodialysis nursing experience. Job satisfaction levels were comparable to studies in other practice areas, with higher satisfaction derived from professional status and interactions with colleagues. Despite nurses viewing their work environment favourably, moderate levels of burnout were noted, with frequent stressors related to workload and patient death and dying. Interestingly, no differences were found by type or location of dialysis unit. Conclusion: Despite acceptable levels of job satisfaction and burnout, stressors related to workload and facets of patient care were identified. Understanding the factors that contribute to job satisfaction, stress and burnout can benefit the healthcare system through decreased costs from retaining valued staff and through improved patient care.
Abstract:
Objectives: To identify and appraise the literature concerning nurse-administered procedural sedation and analgesia in the cardiac catheter laboratory (CCL). Design and data sources: An integrative review method was chosen for this study. The MEDLINE and CINAHL databases, as well as The Cochrane Database of Systematic Reviews and the Joanna Briggs Institute, were searched. Nineteen research articles and three clinical guidelines were identified. Results: The authors of each study reported that nurse-administered sedation in the CCL is safe, owing to the low incidence of complications. However, a higher percentage of deeply sedated patients were reported to experience complications than moderately sedated patients. Confounding this issue, one clinical guideline permits deep sedation without an anaesthetist present, while others recommend against it. All clinical guidelines recommend that nurses be educated about sedation concepts. Other findings focus on pain and discomfort, and on the cost savings of nurse-administered sedation associated with forgoing anaesthetic services. Conclusions: Practice varies owing to limitations in the evidence and inconsistent clinical practice guidelines. Therefore, recommendations for research and practice have been made. Research topics include determining how and in which circumstances capnography can be used in the CCL, discerning the economic impact of sedation-related complications, and developing a set of objectives for nursing education about sedation. For practice, if deep sedation is administered without an anaesthetist present, it is essential that nurses are adequately trained and have access to vital equipment, such as capnography, to monitor ventilation, because deeply sedated patients are more likely to experience sedation-related complications.
These initiatives will go some way to ensuring patients receiving nurse-administered procedural sedation and analgesia for a procedure in the cardiac catheter laboratory are cared for using consistent, safe and evidence-based practices.
Abstract:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied, and that there was contention around some aspects of practice, including the types of medications suitable for use and the depth of sedation that could safely be induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases. The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe.
However, it was concluded from the analysis of the studies and guidelines included in the review that the management of sedation in the CCL was impacted by a variety of contextual factors, including local hospital policy, workforce constraints and cardiologists’ preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor the level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. Only one scale was found that had been developed specifically for the CCL, and it had not undergone psychometric testing; several weaknesses were identified in its item structure. The other sedation scales that were identified had been developed for the intensive care unit (ICU). Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As the findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were made. The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses’ perceptions of the issues and challenges associated with nurse-administered PSA in the CCL. Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring, and the impact that the increasing complexity of procedures has on patients’ sedation requirements. The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs.
This was the first study to quantify the frequency with which nurse-administered PSA was used in the CCL setting and to characterise the associated nursing practices. It was found that nearly all CCLs (94%) utilise nurse-administered PSA. Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice were identified, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR = 1.78; 95% CI = 1.19–2.67; p = 0.005). These findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL. The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. The clinical practice guidelines that were developed set current best-practice standards for pre-procedural patient assessment and risk screening, as well as for the intra- and post-procedural patient monitoring that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting.
In summary, the mixed methods approach enabled the research objectives to be comprehensively addressed in an informed, sequential manner, and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation to note is that the comprehensive appraisal of the evidence, combined with the guideline development process, highlighted numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required to ascertain which specific practices result in the most optimal patient and health service outcomes. Therefore, along with the necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, as part of a continuing program of research in this field.
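The matched case-control result reported above (OR = 1.78; 95% CI = 1.19–2.67) lends itself to a short illustration. The sketch below shows how an odds ratio and a Wald-type 95% confidence interval are derived from discordant pairs in a 1:1 matched design; the pair counts are hypothetical, chosen only to give a similar point estimate, and are not the study's data.

```python
import math

# Hypothetical discordant-pair counts for a 1:1 matched case-control design:
# b = pairs where the case was acutely ill but the matched control was not
# c = pairs where the control was acutely ill but the case was not
b, c = 89, 50

# Conditional (matched-pair) odds ratio estimate
odds_ratio = b / c

# Approximate 95% CI on the log scale: ln(OR) +/- 1.96 * sqrt(1/b + 1/c)
se = math.sqrt(1 / b + 1 / c)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = {lower:.2f}-{upper:.2f}")
```

With these invented counts the estimate is 1.78 with an interval of roughly 1.26–2.52; the published interval differs because it came from the study's actual data, likely analysed with a model adjusting for the matching and other factors.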
Abstract:
Background: In subtropical and tropical Queensland, a legacy of poor housing design, minimal building regulations with few compliance measures, an absence of post-construction performance evaluation, and various social and market factors has led to a high and growing penetration of, and reliance on, air conditioners to provide thermal comfort for occupants. The pervasive reliance on air conditioners has arguably impacted on building forms, changed cultural expectations of comfort and social practices for achieving comfort, and may have resulted in a loss of skills in designing and constructing high-performance building envelopes. Aim: The aim of this paper is to report on initial outcomes of a project that sought to determine how the predicted building thermal performance of twenty-five houses in subtropical and tropical Queensland compared with objective performance measures and with comfort performance as perceived by occupants. The purpose of the project was to shed light on the role of various supply chain agents in the realisation of thermal performance outcomes. Methodology: The case study methodology embraced a socio-technical approach incorporating building science and sociology. Building simulation was used to model thermal performance under controlled comfort assumptions and adaptive comfort conditions. Actual indoor climate conditions were measured by temperature and relative humidity sensors placed throughout each house, whilst occupants’ expectations of thermal comfort and their self-reported behaviours were gathered through semi-structured interviews and periodic comfort surveys. Thermal imaging and air infiltration tests, along with building design documents, were analysed to evaluate the influence of various supply chain agents on the actual performance outcomes.
Results: The results clearly show that in the housing supply chain – from designer to constructor to occupant – there is limited understanding from each agent of their role in contributing to, or inhibiting, occupants’ comfort.
Abstract:
The appropriateness of applying drink driving legislation to motorcycle riding has been questioned, as there may be fundamental differences in the effects of alcohol on driving and motorcycling. It has been suggested that alcohol may redirect riders’ focus from higher-order cognitive skills, such as cornering, judgement and hazard perception, to more physical skills such as maintaining balance. To test this hypothesis, the effects of low doses of alcohol on balance ability were investigated in a laboratory setting. The static balance of twenty experienced and twenty novice riders was measured while they performed either no secondary task, a visual (search) task, or a cognitive (arithmetic) task following the administration of alcohol (0%, 0.02% and 0.05% BAC). Subjective ratings of intoxication and balance impairment increased in a dose-dependent manner in both novice and experienced motorcycle riders, while a BAC of 0.05%, but not 0.02%, was associated with impairments in static balance ability. This balance impairment was exacerbated when riders performed a cognitive, but not a visual, secondary task. Likewise, 0.05% BAC was associated with impairments in novice and experienced riders’ performance of a cognitive, but not a visual, secondary task, suggesting that interactive processes underlie balance and cognitive task performance. There were no observed differences between novice and experienced riders in static balance or secondary task performance, either alone or in combination. Implications for road safety and future ‘drink riding’ policy considerations are discussed.
Abstract:
Introduction: Within the context of road safety, it is important that workload (the portion of a driver’s resources expended to perform a task) remains at a manageable level, preventing overload and consequent performance decrements. Motorcyclists are overrepresented in crash statistics where the vehicle operator has a positive, low blood alcohol concentration (BAC) (e.g., 0.05%). The NASA Task Load Index (NASA-TLX) comprises sub-scales that purportedly assess different aspects of subjective workload. It was hypothesised that, compared to a zero BAC condition, low BACs would be associated with increases in workload ratings and decrements in riding performance. Method: Forty participants (20 novice, 20 experienced) completed simulated motorcycle rides in urban and rural scenarios under low-dose BAC conditions (0.00%, 0.02% and 0.05% BAC), while completing a safety-relevant peripheral detection task (PDT). Six sub-scales of the NASA-TLX were completed after each ride. Riding performance was assessed using the standard deviation of lateral position (SDLP). Hazard perception was assessed by response time to the PDT. Results: Riding performance and hazard perception were affected by alcohol. There was a significant increase in SDLP in the urban scenario, and in PDT reaction time in the rural scenario, at 0.05% BAC compared to 0.00% BAC. Overall NASA-TLX score increased at 0.02% and 0.05% BAC in the urban environment only, with a trend for novices to rate workload higher than experienced riders. There was a significant main effect of sub-scale on workload ratings in both the urban and rural scenarios. Discussion: A BAC of 0.05% was associated with decrements in riding performance in the urban environment, decrements in hazard perception in the rural environment, and increases in overall ratings of subjective workload in the urban environment. The workload sub-scales of the NASA-TLX appear to measure distinct aspects of motorcycle riding-related workload.
Issues of workload and alcohol impaired riding performance are discussed.
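SDLP, the riding-performance measure used above, is simply the standard deviation of the vehicle's lateral position sampled over a ride: larger values mean more weaving and poorer lateral control. A minimal sketch, using invented lane-position samples (metres from lane centre), is:

```python
import statistics

# Hypothetical lateral-position samples (metres from lane centre),
# e.g. logged at fixed intervals during a simulated ride.
sober_ride = [0.05, -0.02, 0.03, -0.04, 0.01, 0.02, -0.03, 0.04]
impaired_ride = [0.15, -0.20, 0.25, -0.10, 0.30, -0.25, 0.05, -0.30]

# SDLP = sample standard deviation of lateral position.
sdlp_sober = statistics.stdev(sober_ride)
sdlp_impaired = statistics.stdev(impaired_ride)

print(f"SDLP sober: {sdlp_sober:.3f} m, impaired: {sdlp_impaired:.3f} m")
```

In the study's terms, a significant rise in this statistic between the 0.00% and 0.05% BAC urban rides is what indicated degraded lateral control.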
The health effects of temperature: current estimates, future projections, and adaptation strategies
Abstract:
Climate change is expected to be one of the biggest global health threats in the 21st century. In response to changes in climate and associated extreme events, public health adaptation has become imperative. This thesis examined several key issues in this emerging research field. The thesis aimed to identify the climate-health (particularly temperature-health) relationships, then develop quantitative models that can be used to project future health impacts of climate change, and therefore help formulate adaptation strategies for dealing with climate-related health risks and reducing vulnerability. The research questions addressed by this thesis were: (1) What are the barriers to public health adaptation to climate change? What are the research priorities in this emerging field? (2) What models and frameworks can be used to project future temperature-related mortality under different climate change scenarios? (3) What is the actual burden of temperature-related mortality? What are the impacts of climate change on future burden of disease? and (4) Can we develop public health adaptation strategies to manage the health effects of temperature in response to climate change? Using a literature review, I discussed how public health organisations should implement and manage the process of planned adaptation. This review showed that public health adaptation can operate at two levels: building adaptive capacity and implementing adaptation actions. However, there are constraints and barriers to adaptation arising from uncertainty, cost, technologic limits, institutional arrangements, deficits of social capital, and individual perception of risks. The opportunities for planning and implementing public health adaptation are reliant on effective strategies to overcome likely barriers. 
I proposed that high priority should be given to multidisciplinary research on the assessment of potential health effects of climate change, projections of future health impacts under different climate and socio-economic scenarios, identification of the health co-benefits of climate change policies, and evaluation of cost-effective public health adaptation options. Heat-related mortality is the most direct and highly significant potential climate change impact on human health. I thus conducted a systematic review of research and methods for projecting future heat-related mortality under different climate change scenarios. The review showed that climate change is likely to result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires an understanding of historical temperature-mortality relationships, and consideration of future changes in climate, population and acclimatisation. Further research is needed to provide a stronger theoretical framework for mortality projections, including a better understanding of socioeconomic development, adaptation strategies, land-use patterns, air pollution and mortality displacement. Most previous studies were designed to examine temperature-related excess deaths or mortality risks. However, if most temperature-related deaths occur in the very elderly, who have only a short life expectancy, then the burden of temperature on mortality would have less public health importance. To guide policy decisions and resource allocation, it is desirable to know the actual burden of temperature-related mortality. To achieve this, I used years of life lost to provide a new measure of the health effects of temperature. I conducted a time-series analysis to estimate years of life lost associated with changes in season and temperature in Brisbane, Australia. I also projected the future temperature-related years of life lost attributable to climate change.
This study showed that the association between temperature and years of life lost was U-shaped, with increased years of life lost on cold and hot days. The temperature-related years of life lost will worsen greatly if future climate change goes beyond a 2 °C increase and occurs without any adaptation to higher temperatures. The excess mortality during prolonged extreme temperatures is often greater than that predicted using a smoothed temperature-mortality association, because a sustained period of extreme temperatures produces an extra effect beyond that predicted by daily temperatures. To better estimate the burden of extreme temperatures, I estimated their effects on years of life lost due to cardiovascular disease using data from Brisbane, Australia. The results showed that the association between daily mean temperature and years of life lost due to cardiovascular disease was U-shaped, with the lowest years of life lost at 24 °C (the 75th percentile of daily mean temperature in Brisbane), rising progressively as temperatures became hotter or colder. There were significant added effects of heat waves, but no added effects of cold spells. Finally, public health adaptation to hot weather is necessary and pressing. I discussed how to manage the health effects of temperature, especially in the context of climate change. Strategies to minimise the health effects of high temperatures and climate change fall into two categories: reducing heat exposure and managing the health effects of high temperatures. However, policy decisions need information on specific adaptations, together with their expected costs and benefits. Therefore, more research is needed to evaluate cost-effective adaptation options. In summary, this thesis adds to the large body of literature on the impacts of temperature and climate change on human health. It improves our understanding of the temperature-health relationship, and of how this relationship will change as temperatures increase.
Although the research is limited to one city, which restricts the generalisability of the findings, the methods and approaches developed in this thesis will be useful to other researchers studying temperature-health relationships and climate change impacts. The results may be helpful for decision-makers who develop public health adaptation strategies to minimise the health effects of extreme temperatures and climate change.
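The U-shaped temperature-YLL association described above can be sketched numerically: given daily mean temperatures and corresponding years of life lost, the minimum-YLL ("optimum") temperature is found by scanning the curve. The data below are synthetic, constructed so that the minimum falls at 24 °C, the optimum reported for Brisbane; in a real analysis the curve would come from a fitted spline in a time-series regression, not a noise-free parabola.

```python
# Synthetic U-shaped temperature-YLL relationship with its minimum
# placed at 24 C (the 75th percentile of daily mean temperature in
# Brisbane, per the study). YLL values are on an arbitrary scale.
temps = [t * 0.5 for t in range(20, 69)]           # 10.0 ... 34.0 C
yll = [50 + 0.8 * (t - 24.0) ** 2 for t in temps]  # U-shaped curve

# Locate the temperature with the lowest years of life lost.
optimum = min(zip(yll, temps))[1]
print(f"Minimum-YLL temperature: {optimum} C")
```

Days hotter or colder than this optimum contribute excess YLL, which is the quantity the thesis projects forward under warming scenarios.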
Abstract:
Purpose: This study investigated the efficacy and safety of cryotherapy, in the form of frozen gel gloves, in relation to docetaxel-induced hand and fingernail toxicities. Patients and methods: After piloting with 21 patients, a consecutive series sample of patients (n = 53) prescribed docetaxel every three weeks, for a minimum of three cycles, was enrolled in this randomised controlled trial. Participants acted as their own controls, with a frozen gel glove worn on one randomised hand for 15 minutes prior to infusion, for the duration of the infusion, and for 15 minutes after completion of treatment. Hand and nail toxicities were evaluated by two blinded assessors according to CTCAE v4 criteria. To assess the potential for cross-infection from multi-use gloves, microbial culture and sensitivity swabs were taken of each glove at every tenth use. Results: Of the 53 participants enrolled in the main study, 21 provided evaluable data; there was a 60% withdrawal rate due to patient discomfort with the intervention. The mean incidence and severity of toxicities across all evaluable cycles in control and intervention hands respectively were: erythroderma Grade 1 (5%/5%); nail discolouration Grade 1 (81%/67%); nail loss Grade 1 (19%/19%); and nail ridging Grade 1 (57%/57%). No significant differences were found between hand conditions in terms of time to event, nor in toxicity between gloved and non-gloved hands. Conclusion: While cryotherapy in the form of frozen gloves is safe for the cutaneous toxicities associated with docetaxel, its limited efficacy, patient discomfort and some logistical issues preclude its use in our clinical setting.
Abstract:
In 2012, Queensland University of Technology (QUT) committed to the massive project of revitalising its Bachelor of Science (ST01) degree. Like most universities in Australia, QUT has begun work to align all courses by 2015 to the requirements of the updated Australian Qualifications Framework (AQF), which is regulated by the Tertiary Education Quality and Standards Agency (TEQSA). From the very start of the redesigned degree program, students approach scientific study with an exciting mix of theory and highly topical real-world examples through their chosen “grand challenge.” These challenges, Fukushima and nuclear energy for example, are the lenses used to explore science and lead to 21st-century learning outcomes for students. For the teaching and learning support staff, our grand challenge is to expose all science students to multidisciplinary content with a strong emphasis on embedding information literacies into the curriculum. With ST01, QUT is taking the initiative to rethink not only content but also how units are delivered, and even how we work together between the faculty, the library, and learning and teaching support. This was the desired outcome, but as we move from design to implementation, has this goal been achieved? A main component of the new degree is to ensure the scaffolding of information literacy skills throughout the entirety of the three-year course. However, with the strong focus on problem-based learning and group work skills, many issues arise for both students and lecturers. A move away from a traditional lecture style is necessary but impacts on academics’ workloads and comfort levels. Therefore, academics, in collaboration with librarians and other learning support staff, must draw on each other’s expertise to ensure that pedagogy, assessments and targeted classroom activities are mapped within and between units.
This partnership can counteract the tendency of isolated, unsupported academics to concentrate on day-to-day teaching at the expense of consistency between units and big-picture objectives. Support staff may have a more holistic view of a course or degree than the coordinators of individual units, making communication and truly collaborative planning even more critical. In addition, due to staffing time pressures, the design and delivery of new curricula are generally done quickly, with no option for the designers to stop and reflect on the experience and outcomes. It is vital that we take this unique opportunity to closely examine what QUT has and has not achieved, so that we can recommend a better way forward. This presentation will discuss these important issues and stumbling blocks in order to provide a set of best practice guidelines for QUT and other institutions. The aim is to help improve collaboration within the university, as well as to maximise students’ ability to put information literacy skills into action. As our students embark on their own grand challenges, we must challenge ourselves to honestly assess our own work.
Abstract:
This study presents the largest known investigation of discomfort glare, with 493 surveys collected from five green buildings in Brisbane, Australia. The study was conducted with full-time employees working under their everyday lighting conditions, all of whom had no affiliation with the research institution. The survey consisted of a specially tailored questionnaire to assess potential factors relating to discomfort glare. Luminance maps extracted from high dynamic range (HDR) images were used to capture the luminous environment of the occupants. Occupants who experienced glare on their monitor and/or electric glare were excluded from the analysis, leaving 419 available surveys. Occupants were more sensitive to glare than any of the tested indices accounted for. A new index, the UGP, was developed to take into account the scope of results in the investigation. The index is based on a linear transformation of the UGR to calculate the probability of disturbed persons. However, all glare indices had some correlation with discomfort, and statistically there was no difference between the DGI, UGR and CGI. The UGP broadly reflects the demographics of the working population in Australia, and the new index is applicable to open-plan green buildings.
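The abstract describes the UGP as a linear transformation of the UGR onto a probability scale. The published coefficients are not reproduced here, so the sketch below uses placeholder values `a` and `b` purely to illustrate the form: a linear map of the glare rating, clamped to the unit interval so it behaves as a probability.

```python
def ugp(ugr: float, a: float = 0.01, b: float = 0.0) -> float:
    """Illustrative glare probability: a linear transform of a
    Unified Glare Rating (UGR) value, clamped to [0, 1].
    The coefficients a and b are placeholders, not the published values."""
    return min(1.0, max(0.0, a * ugr + b))

# The map is monotonic: a higher UGR (more glare) never yields a
# lower probability of disturbance, and extremes are clamped.
assert ugp(-100.0) == 0.0 and ugp(1000.0) == 1.0
for low, high in [(10, 20), (20, 30), (30, 40)]:
    assert ugp(low) <= ugp(high)
```

In practice the coefficients would be fitted to survey responses such as the 419 analysed here, so that the index predicts the proportion of occupants reporting disturbance at a given UGR.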