545 results for driving while impaired
Abstract:
Alcohol-related driving is a longstanding, serious problem in China (Li, Xie, Nie, & Zhang, 2012). On 1 May 2011 a national law was introduced that criminalized drunk driving and imposed serious penalties, including jail, for driving with a blood alcohol concentration above 80 mg/100 ml. This pilot study, undertaken a year after the introduction of the law, sought traffic police officers' perceptions of drink driving and the practice of breath alcohol testing (BAT) in a large city in Guangdong Province, southern China. A questionnaire survey and semi-structured interviews were used to gain an in-depth understanding of issues relevant to alcohol-related driving. Fifty-five traffic police officers were recruited for the survey, and six traffic police officers with a variety of working experience, including roadside alcohol breath testing, traffic crash investigation and police resourcing, were interviewed individually. The officers were recruited by the first author with the assistance of staff from the Guangdong Institute of Public Health, Centre for Disease Control and Prevention (CDC). Interview participants reported three primary reasons why people drink and drive: 1) being prepared to take the chance of not being apprehended by police; 2) the strong traditional Chinese drinking culture; and 3) insufficient public awareness of the harmfulness of drink driving. Problems associated with the process of breath alcohol testing were described and fit broadly into two categories: resourcing and avoiding detection. It was reported that there were insufficient traffic police officers to conduct routine traffic policing, including alcohol testing. Police BAT equipment was considered sufficient for routine traffic situations but not for highway traffic operations. The Public Security Bureau, which is responsible for education about safe driving, uses local media and posters, but participants thought these education campaigns were limited in scope. Participants also described detection avoidance strategies used by drivers, including: changing route; ignoring a police instruction to stop; staying inside the vehicle with windows and doors locked to avoid being tested; intentionally not performing breath tests correctly; and arguing with officers. This pilot study provided important insights from traffic police in one Chinese city which suggest there may be potential unintended effects of introducing more severe penalties, including a range of strategies reportedly used by drivers to avoid detection. Recommendations for future research include a larger study to confirm these findings and examine the training and education of drivers; the focus and reach of publicity; and possible resource needs to support police enforcement.
Abstract:
Self-reported driving behaviour in the occupational driving context has typically been measured with scales adapted from the general driving population (i.e. the Manchester Driver Behaviour Questionnaire; DBQ). However, research suggests that occupational driving is influenced by unique factors operating within the workplace environment, and thus a behavioural scale should reflect the behaviours prevalent and unique within that driving context. To overcome this limitation, the Occupational Driver Behaviour Questionnaire (ODBQ) was developed, which utilises a relevant theoretical model to assess the impact of the broader workplace context on driving behaviour. Although the theoretical argument has been established, research is yet to examine whether the ODBQ or the DBQ is a more sensitive measure of the workplace context. As such, this paper identifies selected organisational factors (i.e. safety climate and role overload) as predictors of the DBQ and the ODBQ and compares the relative predictive value in both models. In undertaking this task, 248 occupational drivers were recruited from a community-oriented nursing population. As predicted, hierarchical regression analyses revealed that the organisational factors accounted for a significantly greater proportion of variance in the ODBQ than in the DBQ. These findings offer a number of practical and theoretical applications for occupational driving practice and future research.
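As a rough illustration of the hierarchical regression comparison described in the abstract above, the following Python sketch (using pandas and statsmodels) enters organisational factors after demographic controls and compares the incremental R-squared for a DBQ-style and an ODBQ-style outcome. The data file and all column names are assumptions made for illustration only; they are not taken from the study.

# Hypothetical sketch of a two-step hierarchical regression comparison.
# The data file and column names are illustrative, not from the original study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("occupational_drivers.csv")  # assumed survey data set

def incremental_r2(outcome):
    """R-squared added by organisational factors over demographic controls."""
    step1 = smf.ols(f"{outcome} ~ age + weekly_driving_hours", data=df).fit()
    step2 = smf.ols(
        f"{outcome} ~ age + weekly_driving_hours + safety_climate + role_overload",
        data=df,
    ).fit()
    return step2.rsquared - step1.rsquared

print("Incremental R-squared, DBQ :", incremental_r2("dbq_score"))
print("Incremental R-squared, ODBQ:", incremental_r2("odbq_score"))

In the study itself the difference was tested for significance rather than simply compared numerically, with the organisational factors explaining significantly more variance in the ODBQ.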
Abstract:
An increasing body of research is highlighting the involvement of illicit drugs in many road fatalities. Deterrence theory has been a core conceptual framework underpinning traffic enforcement as well as interventions designed to reduce road fatalities. Essentially the effectiveness of deterrence-based approaches is predicated on perceptions of certainty, severity, and swiftness of apprehension. However, much less is known about how the awareness of legal sanctions can impact upon the effectiveness of deterrence mechanisms and whether promoting such detection methods can increase the deterrent effect. Nevertheless, the implicit assumption is that individuals aware of the legal sanctions will be more deterred. This study seeks to explore how awareness of the testing method impacts upon the effectiveness of deterrence-based interventions and intentions to drug drive again in the future. In total, 161 participants who reported drug driving in the previous six months took part in the current study. The results show that awareness of testing had a small effect upon increasing perceptions of the certainty of apprehension and severity of punishment. However, awareness was not a significant predictor of intentions to drug drive again in the future. Importantly, higher levels of drug use were a significant predictor of intentions to drug drive in the future. Whilst awareness does have a small effect on deterrence variables, the influence of levels of drug use seems to reduce any deterrent effect.
Abstract:
This study used a video-based hazard perception dual task to compare the hazard perception skills of young drivers with those of middle-aged, more experienced drivers and to determine whether these skills can be improved with video-based road commentary training. The primary task required participants to detect and verbally identify immediate hazards in video-based traffic scenarios while concurrently performing a secondary tracking task simulating the steering of real driving. The results showed that the young drivers perceived fewer immediate hazards (mean = 75.2%, n = 24, 19 females) than the more experienced drivers (mean = 87.5%, n = 8, all females) and had longer hazard perception times, but performed better on the secondary tracking task. After the road commentary training, the mean percentage of hazards detected and identified by the young drivers improved to the level of the experienced drivers and was significantly higher than that of an age- and driving-experience-matched control group. The results will be discussed in the context of psychological theories of hazard perception and in relation to road commentary as an evidence-based training intervention that seems to improve many aspects of unsafe driving behaviour in young drivers.
Abstract:
Eco-driving instructions can reduce fuel consumption by up to 20% (EcoMove, 2010). Participants (N = 13) drove an instrumented vehicle (a Toyota Camry 2007) with an automatic transmission. Participants' fuel consumption was compared before and after they received eco-driving instructions. Participants drove the same vehicle on the same urban route under similar traffic conditions. Results show that, on free-flow sections of the track, all participants drove slightly faster (on average 0.7 km/h faster) during the lap for which they were instructed to drive in an eco-friendly manner than when they were not given the eco-driving instruction. Surprisingly, eco-driving instructions increased the RPM significantly in most cases. Fuel consumption decreased slightly (by 6%) after the eco-driving instructions. We found strong evidence that the fuel saving observed in our experiment (urban environment, automatic transmission) falls short of the 20% reduction claimed in other international trials.
Abstract:
This paper describes the development and validation of a PC-based MUARC Driver Distraction Test designed to measure simulated driving performance while the driver is performing a secondary task. The paper discusses the logic behind the development of the test, including the principles that guided its design, as well as the results of a pilot validation study. The findings from this study were consistent with previous research and theory, and with those obtained using the LCT. The results did, however, highlight a number of refinements that were necessary to improve the utility of the test.
Abstract:
Eccentric exercise commonly results in muscle damage. The primary sequence of events leading to exercise-induced muscle damage is believed to involve initial mechanical disruption of sarcomeres, followed by impaired excitation-contraction coupling and calcium signaling, and finally, activation of calcium-sensitive degradation pathways. Muscle damage is characterized by ultrastructural changes to muscle architecture, increased muscle proteins and enzymes in the bloodstream, loss of muscular strength and range of motion and muscle soreness. The inflammatory response to exercise-induced muscle damage is characterized by leukocyte infiltration and production of pro-inflammatory cytokines within damaged muscle tissue, systemic release of leukocytes and cytokines, in addition to alterations in leukocyte receptor expression and functional activity. Current evidence suggests that inflammatory responses to muscle damage are dependent on the type of eccentric exercise, previous eccentric loading (repeated bouts), age and gender. Circulating neutrophil counts and systemic cytokine responses are greater after eccentric exercise using a large muscle mass (e.g. downhill running, eccentric cycling) than after other types of eccentric exercise involving a smaller muscle mass. After an initial bout of eccentric exercise, circulating leukocyte counts and cell surface receptor expression are attenuated. Leukocyte and cytokine responses to eccentric exercise are impaired in elderly individuals, while cellular infiltration into skeletal muscle is greater in human females than males after eccentric exercise. Whether alterations in intracellular calcium homeostasis influence inflammatory responses to muscle damage is uncertain. Furthermore, the effects of antioxidant supplements are variable, and the limited data available indicates that anti-inflammatory drugs largely have no influence on inflammatory responses to eccentric exercise. In this review, we compare local versus systemic inflammatory responses, and discuss some of the possible mechanisms regulating the inflammatory responses to exercise-induced muscle damage in humans.
Abstract:
As the world's population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared with North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies of nutrient cycling and associated processes at the catchment scale have been reported in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited, and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors for algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest inhabited continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at the catchment scale. Increasingly, these rainfall patterns are becoming variable. Monitoring these climatic conditions and the hydrological response of agricultural catchments is therefore also important to reduce the anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence associated with forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four components, each reported individually. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment and shallow groundwater N and P concentrations, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on controls over distributions of N and P in surface waters and associated groundwater.
A total of 30 surface and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared with the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L⁻¹; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L⁻¹), in particular in the forested subcatchment. Average NO₃-N in the groundwater ranges from 0.39 to 2.08 mg N L⁻¹, and organic N averages between 0.07 and 0.3 mg N L⁻¹. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L⁻¹), and the dominant inorganic form of N is NH₄-N, with values ranging between 0.38 and 0.41 mg N L⁻¹. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soil and sediments, P is negligible. In the second paper, the stable isotopes of N (¹⁴N/¹⁵N) and H₂O (¹⁶O/¹⁸O and ²H/H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the relation of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequently runoff is the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L⁻¹ to 0.09 mg N L⁻¹, and a decrease in the δ¹⁵N signatures, from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge about the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient (EF) shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment (a sketch of this calculation follows the abstract). In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%). Only a small proportion of the water in the creek is contributed by total baseflow (35%). This finding supports the results of the stable isotopes ¹⁶O/¹⁸O and ²H/H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using ¹⁶O/¹⁸O and ²H/H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads.
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. Also, the study concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water. Concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
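The model efficiency coefficient (EF) quoted in the abstract above is, in SWAT studies, commonly the Nash–Sutcliffe efficiency. A minimal sketch of that calculation is given below in Python with NumPy; the discharge values are placeholders for illustration, not data from the thesis.

# Minimal sketch of the Nash-Sutcliffe model efficiency (EF) commonly used to
# judge how well SWAT-simulated discharge matches observed flow.
# The discharge values below are placeholders, not data from the thesis.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """EF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [1.2, 0.8, 2.5, 4.1, 3.3]   # hypothetical observed monthly discharge (m^3/s)
sim = [1.0, 0.9, 2.7, 3.8, 3.5]   # hypothetical simulated monthly discharge (m^3/s)
print(f"EF = {nash_sutcliffe(obs, sim):.2f}")

An EF of 1 indicates a perfect match of simulated to observed flow, which appears to be the sense in which the reported improvement from 54% to 94% should be read.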
Abstract:
Adolescent injury remains a significant public health concern and is often the result of at-risk transport-related behaviours. When a person is injured, actions taken by bystanders are of crucial importance, and timely first aid appears to reduce the severity of some injuries (Hussain & Redmond, 1994). Accordingly, researchers have suggested that first aid training should be more widely available as a potential strategy to reduce injury (Lynch et al., 2006). Further research has identified schools as an ideal setting for learning first aid skills as a means of injury prevention (Maitra, 1997). The current research examines the implications of school-based first aid training for young adolescents for injury prevention, particularly in relation to transport injuries. First aid training was integrated with peer protection and school connectedness within the Skills for Preventing Injury in Youth (SPIY) program (Buckley & Sheehan, 2009) and evaluated to determine whether there was a reduction in the likelihood of transport-related injuries at six months post-intervention. In Queensland, Australia, 35 high schools were recruited and randomly assigned to intervention and control conditions in early April 2012. A total of 2,000 Year nine students (mean age 13.5 years, 39% male) completed surveys six months post-intervention in November 2012. Analyses will compare the intervention students with control group students who self-reported i) first aid training with a teacher, professional or other adult and ii) no first aid training in the preceding six months. Using the Extended Adolescent Injury Checklist (E-AIC) (Chapman, Buckley & Sheehan, 2011), the transport-related injury experiences included being injured while "riding as a passenger in a car", "driving a car off road" and "riding a bicycle". It is expected that students taught first aid within SPIY will report significantly fewer transport-related injuries in the previous three months compared with the control groups described above. Analyses will be conducted separately by sex and socio-economic class of schools. Findings from this study will provide insight into the value of first aid in adolescent injury prevention and provide evidence as to whether teaching first aid skills within a school-based health education curriculum has traffic safety implications.
Abstract:
Introduction: Although advances in treatment modalities have improved the survival of head and neck (H&N) cancer patients over recent years, survivors' quality of life (QoL) can be impaired for a number of reasons. The investigation of QoL determinants can inform the design of supportive interventions for this population. Objectives: To examine the QoL of H&N cancer survivors at 1 year after treatment and to identify potential determinants affecting their QoL. Methods: A systematic search of the literature was conducted in December 2011 in five databases (PubMed, MEDLINE, Scopus, ScienceDirect and CINAHL) using the combined search terms 'head and neck cancer', 'quality of life', 'health-related quality of life' and 'systematic review'. The methodological quality of the selected studies was assessed by two reviewers using predefined criteria. Study characteristics and results were abstracted and summarized. Results: Thirty-seven studies met all inclusion criteria, with methodological quality ranging from moderate to high. The global QoL of H&N cancer survivors returned to baseline at 1 year after treatment. Significant improvement was seen in emotional functioning, while physical functioning, xerostomia, sticky/insufficient saliva and fatigue were consistently worse at 12 months compared with baseline. Age, cancer site and stage, social support, smoking and the presence of a feeding tube were significant QoL determinants at 12 months. Conclusions: Although the global QoL of H&N cancer survivors recovers by 12 months after treatment, problems with physical functioning, fatigue, xerostomia and sticky saliva persist. Regular assessment should be carried out to monitor these problems. Further research is required to develop appropriate and effective interventions for this population.
Abstract:
A finely-tuned innate immune response plays a pivotal role in protecting the host against bacterial invasion during periodontal disease progression. Hyperlipidemia has been suggested to exacerbate periodontal health conditions. However, the underlying mechanism has not been addressed. In the present study, we investigated the effect of hyperlipidemia on innate immune responses to infection with the periodontal pathogen Porphyromonas gingivalis. Apolipoprotein E-deficient (ApoE−/−) and wild-type mice at the age of 20 weeks were used for the study. Peritoneal macrophages were isolated and subsequently used for the study of viable P. gingivalis infection. ApoE−/− mice demonstrated inhibited iNOS production and impaired clearance of P. gingivalis in vitro and in vivo; furthermore, ApoE−/− mice displayed a disrupted cytokine production pattern in response to P. gingivalis, with decreased production of tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), IL-1β and monocyte chemotactic protein-1. Microarray data demonstrated that the Toll-like receptor (TLR) and NOD-like receptor (NLR) pathways were altered in ApoE−/− mouse macrophages; further analysis of pattern recognition receptors (PRRs) demonstrated that expression of triggering receptor expressed on myeloid cells-1 (TREM-1), an amplifier of the TLR and NLR pathways, was decreased in ApoE−/− mouse macrophages, leading to decreased recruitment of NF-κB onto the promoters of TNF-α and IL-6. Our data suggest that in ApoE−/− mice hyperlipidemia disrupts the expression of PRRs and cripples the host's capability to generate a sufficient innate immune response to P. gingivalis, which may facilitate immune evasion, subgingival colonization and establishment of P. gingivalis in the periodontal niche.
Abstract:
Young male drivers are over-represented in road-related fatalities. Speeding is a pervasive and significant contributor to road trauma. Anti-speeding messages represent a long-standing strategy aimed at discouraging drivers from speeding. These messages, however, have not always achieved their persuasive objectives, which may be due, in part, to them not always targeting the most salient beliefs underpinning the speeding behavior of particular driver groups. The current study elicited key beliefs underpinning speeding behavior, as well as strategies used to avoid speeding, using a well-validated belief-based model, the Theory of Planned Behavior, and in-depth qualitative methods. To obtain the most comprehensive understanding of the salient beliefs and strategies of young male drivers, how such beliefs and strategies compared with those of drivers of varying ages and genders was also explored. Overall, 75 males and females (aged 17-25 or 30-55 years) participated in group discussions. The findings revealed beliefs that were particularly relevant to young males and that would likely represent key foci for developing message content. For instance, the need to feel in control and the desire to experience positive affect when driving were salient advantages, while infringements were a salient disadvantage, in particular the loss of points and the implications associated with potential licence loss as opposed to the monetary (fine) loss (behavioral beliefs). For normative influences, young males appeared to hold notable misperceptions (compared with other drivers, such as young females); for instance, young males believed that females/girlfriends were impressed by their speeding. In the case of control beliefs, the findings revealed low perceptions of control with respect to being able to not speed, and a belief that something "extraordinary" would need to happen for a young male driver to lose control of his vehicle while speeding. The practical implications of the findings, in terms of providing suggestions for devising the content of anti-speeding messages, are discussed.
Abstract:
The use of intelligent transport systems is proliferating across the Australian road network, particularly on major freeways. New technology allows a greater range of signs and messages to be displayed to drivers. While there has been a long history of human factors analyses of signage, no evaluation has been conducted of this novel, sometimes dynamic, signage or of potential interactions when signs are co-located. The purpose of this driving simulator study was to investigate drivers' behavioural changes and comprehension resulting from the co-location of Lane Use Management Systems with static signs and (Enhanced) Variable Message Signs on Queensland motorways. A section of motorway was simulated, and nine scenarios were developed which presented a combination of signage cases across levels of driving task complexity. Two higher-risk road user groups were targeted for this research on an advanced driving simulator: older (65+ years, N = 21) and younger (18-22 years, N = 20) drivers. Changes in sign co-location and task complexity had a small effect on driver comprehension of the signs and on vehicle dynamics variables, including deviation from the posted speed limit, headway, standard deviation of lane keeping and brake jerks. However, increasing the amount of information provided to drivers at a given location (by co-locating several signs) increased participants' gaze duration on the signs. With co-location of signs and without added task complexity, a single gaze exceeded 2 s for more than half of the participants in both groups, and reached up to 6 s for some individuals.
Abstract:
Background: Side effects of the medications used for procedural sedation and analgesia in the cardiac catheterisation laboratory are known to cause impaired respiratory function. Impaired respiratory function poses considerable risk to patient safety as it can lead to inadequate oxygenation. Having knowledge about the conditions that predict impaired respiratory function prior to the procedure would enable nurses to identify at-risk patients and selectively implement intensive respiratory monitoring. This would reduce the possibility of inadequate oxygenation occurring. Aim: To identify pre-procedure risk factors for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory. Design: Retrospective matched case–control. Methods: 21 cases of impaired respiratory function were identified and matched to 113 controls from a consecutive cohort of patients over 18 years of age. Conditional logistic regression was used to identify risk factors for impaired respiratory function. Results: With each additional indicator of acute illness, case patients were nearly two times more likely than their controls to experience impaired respiratory function (OR 1.78; 95% CI 1.19–2.67; p = 0.005). Indicators of acute illness included emergency admission, being transferred from a critical care unit for the procedure or requiring respiratory or haemodynamic support in the lead up to the procedure. Conclusion: Several factors that predict the likelihood of impaired respiratory function were identified. The results from this study could be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory.
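To make the reported effect size concrete, the short Python sketch below shows the standard back-transformation from a conditional logistic regression coefficient (a log odds) and its standard error to an odds ratio with a 95% confidence interval. The coefficient and standard error used here are hypothetical values, chosen only to reproduce numbers of roughly the same magnitude as the OR and CI reported in the abstract above.

# Back-transforming a (conditional) logistic regression coefficient into an
# odds ratio with a 95% confidence interval.
# The beta and SE below are hypothetical, chosen to roughly match the reported values.
import math

beta = 0.577  # hypothetical log-odds coefficient per additional indicator of acute illness
se = 0.206    # hypothetical standard error of that coefficient

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")  # approx. 1.78 (1.19 to 2.67)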