480 results for Overall Equipment Effectiveness


Relevance: 20.00%

Abstract:

Many governments throughout the world rely heavily on traffic law enforcement programs to modify driver behaviour and enhance road safety. There are two related functions of traffic law enforcement, apprehension and deterrence, and these are achieved through three processes: the establishment of traffic laws, the policing of those laws, and the application of penalties and sanctions to offenders. Traffic policing programs can vary by visibility (overt or covert) and deployment methods (scheduled and non-scheduled), while sanctions can serve to constrain, deter or reform offending behaviour. This chapter will review the effectiveness of traffic law enforcement strategies from the perspective of a range of high-risk, illegal driving behaviours including drink/drug driving, speeding, seat belt use and red light running. Additionally, this chapter discusses how traffic police are increasingly using technology to enforce traffic laws and thus reduce crashes. The chapter concludes that effective traffic policing involves a range of both overt and covert operations and includes a mix of automatic and more traditional manual enforcement methods. It is important to increase both the perceived and actual risk of detection by ensuring that traffic law enforcement operations are sufficiently intensive, unpredictable in nature and conducted as widely as possible across the road network. A key means of maintaining the unpredictability of operations is through the random deployment of enforcement and/or the random checking of drivers. The impact of traffic enforcement is also heightened when it is supported by public education campaigns. In the future, technological improvements will allow the use of more innovative enforcement strategies. Finally, further research is needed to continue the development of traffic policing approaches and address emerging road safety issues.

Relevance: 20.00%

Abstract:

Sleep-related crashes have been estimated to account for approximately 20% of all fatal and severe crashes. The use of sleepiness countermeasures by drivers is an important component of reducing the incidence of sleep-related crashes. Taking a brief nap and stopping for a rest break are two highly publicised countermeasures for driver sleepiness and are also believed by drivers to be the most effective countermeasures. Despite this belief, there is scarce evidence to support the utility of these countermeasures for reducing driver sleepiness levels. Therefore, determining the effectiveness of these countermeasures is an important road safety concern. The current study utilised a young adult sample (N = 20) to investigate the effectiveness of a nap and an active rest break. The effects of the countermeasures were evaluated by physiological, behavioural (hazard perception skill) and subjective measures previously found sensitive to sleepiness. Participants initially completed two hours of a simulated driving task, followed by either a 15-minute nap opportunity or a 15-minute active rest break that included 10 minutes of brisk walking. After the break, participants completed one final hour of the simulated driving task. A within-subjects design was used so that each participant completed both the nap and the active rest break conditions on separate occasions. The analyses revealed that only the nap break provided any meaningful reduction in physiological sleepiness, reduced subjective sleepiness levels, and maintained hazard perception performance. In contrast, the active rest break did not reduce physiological sleepiness and resulted in a decrement in hazard perception performance (i.e., an increase in reaction time latencies), with a transient reduction in subjective sleepiness levels. A number of theoretical, empirical and practical issues were identified by the current study.

Relevance: 20.00%

Abstract:

Background: Improving timely access to reperfusion is a major goal of ST-segment–elevation myocardial infarction care. We sought to compare the population impact of interventions proposed to improve timely access to reperfusion therapy in Australia. Methods and Results: Australian hospital, population and road network data were integrated using Geographical Information Systems. Hospitals were classified into those that provided primary percutaneous coronary intervention (PPCI) or fibrinolysis. The population impact of interventions proposed to improve timely access to reperfusion (PPCI, fibrinolysis, or both) was modeled and compared. Timely access to reperfusion was defined as the proportion of the population capable of reaching a fibrinolysis facility in ≤60 minutes or a PPCI facility in ≤120 minutes from emergency medical services activation. The majority (93.2%) of the Australian population has timely access to reperfusion, mainly (53%) through fibrinolysis. Only 40.2% of the population has timely access to PPCI, and access to PPCI services is particularly limited in regional areas and nonexistent in remote areas. Optimizing the emergency medical services' response or increasing PPCI services resulted in marginal improvements in timely access (1.8% and 3.7%, respectively). Direct transport to PPCI facilities and interhospital transfer for PPCI improve timely access to PPCI for 19.4% and 23.5% of the population, respectively. Prehospital fibrinolysis markedly improved access to timely reperfusion in regional and remote Australia. Conclusions: Significant gaps in the timely provision of reperfusion remain in Australia. Systematic implementation of changes in service delivery has the potential to improve timely access to PPCI for a majority of the population and to improve access to fibrinolysis for those living in regional and remote areas.
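As a schematic sketch only (not the study's Geographical Information Systems model), the timely-access definition above reduces to a population-weighted coverage calculation; the areas, travel times and populations below are invented for illustration.

```python
# Schematic only: timely access means <=60 min road travel to a fibrinolysis
# hospital or <=120 min to a PPCI hospital from EMS activation.
def timely_access_share(areas):
    """areas: list of dicts with 'pop', 'min_to_lysis', 'min_to_ppci' (minutes)."""
    total = sum(a["pop"] for a in areas)
    covered = sum(
        a["pop"]
        for a in areas
        if a["min_to_lysis"] <= 60 or a["min_to_ppci"] <= 120
    )
    return covered / total

# Invented example areas (population, travel times in minutes).
areas = [
    {"pop": 4_500_000, "min_to_lysis": 15, "min_to_ppci": 20},   # metropolitan
    {"pop": 1_200_000, "min_to_lysis": 45, "min_to_ppci": 150},  # regional
    {"pop": 300_000, "min_to_lysis": 90, "min_to_ppci": 400},    # remote
]
print(f"{timely_access_share(areas):.1%}")  # 95.0% of this toy population covered
```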

Relevance: 20.00%

Abstract:

This article reviews the literature on the outcome of flapless surgery for dental implants in the posterior maxilla. The literature search was carried out using the keywords flapless, dental implants and maxilla. A hand search and a Medline search were carried out on studies published between 1971 and 2011. The authors included research involving a minimum of 15 dental implants with a follow-up period of 1 year and an outcome measurement of implant survival, but excluded studies involving multiple simultaneous interventions and studies with missing data. The Cochrane approach for cohort studies and the Oxford Centre for Evidence-Based Medicine criteria were applied. Of the 56 published papers selected, 14 papers on the flapless technique showed high overall implant survival rates. The prospective studies yielded a survival rate of 97.01% (95% CI: 90.72–99.0), while retrospective studies or case series showed 95.08% (95% CI: 91.0–97.93) survival. The average rate of intraoperative complications with the flapless procedure was 6.55%. The limited data obtained showed that flapless surgery in the posterior maxilla could be a viable and predictable treatment method for implant placement. Flapless surgery tends to be more applicable in this area of the mouth. Further long-term controlled clinical studies are needed.

Relevance: 20.00%

Abstract:

In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using (i) a state-of-the-art dual-frequency GPS receiver and (ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals.

A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 meter in the static tests and 0.2 to 0.3 meter in the kinematic tests, while PPP achieved 0.5 to 1.5 meters in the static tests and 1 to 1.8 meters in the kinematic tests, using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market (single-frequency) GNSS receivers were then tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats, was evaluated first. A 24-hour transmission test was conducted to compare network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate and so on. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests also showed that mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level through an appropriate setting of the Age of Differential. Finally, the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system were investigated. As the interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meter. The results showed that the positioning accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when correction data were transmitted at intervals of up to 20 seconds.
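As a rough illustration of two of the quantities reported above, the sketch below shows how a horizontal RMS precision figure and the RTCM 2.x vs. 3.0 throughput saving could be computed; it is not the thesis code, and the coordinate and byte-count values are invented placeholders.

```python
# A minimal sketch (not the thesis code): horizontal RMS error of kinematic
# positioning solutions against a reference trajectory, and the relative
# data-volume saving of RTCM 3.0 over RTCM 2.x.
import math

def horizontal_rms(solutions, reference):
    """RMS of horizontal (east, north) errors in metres.

    solutions, reference: lists of (east, north) coordinates in a local
    metric frame, matched epoch by epoch.
    """
    sq_errors = [
        (e_s - e_r) ** 2 + (n_s - n_r) ** 2
        for (e_s, n_s), (e_r, n_r) in zip(solutions, reference)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

def throughput_reduction(bytes_rtcm2, bytes_rtcm3):
    """Percentage reduction in correction-data volume (RTCM 3.0 vs 2.x)."""
    return 100.0 * (bytes_rtcm2 - bytes_rtcm3) / bytes_rtcm2

# Illustrative byte counts only, chosen to give a ~66% saving like the figure quoted above.
print(throughput_reduction(bytes_rtcm2=3_000_000, bytes_rtcm3=1_020_000))  # 66.0
```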

Relevance: 20.00%

Abstract:

The human Ureaplasma species are the most frequently isolated bacteria from the upper genital tract of pregnant women and can cause clinically asymptomatic, intra-uterine infections, which are difficult to treat with antimicrobials. Ureaplasma infection of the upper genital tract during pregnancy has been associated with numerous adverse outcomes including preterm birth, chorioamnionitis and neonatal respiratory diseases. The mechanisms by which ureaplasmas are able to chronically colonise the amniotic fluid and avoid eradication by (i) the host immune response and (ii) maternally-administered antimicrobials, remain virtually unexplored. To address this gap within the literature, this study investigated potential mechanisms by which ureaplasmas are able to cause chronic, intra-amniotic infections in an established ovine model. In this PhD program of research, the effectiveness of standard, maternal erythromycin for the treatment of chronic, intra-amniotic ureaplasma infections was evaluated. At 55 days of gestation, pregnant ewes received an intra-amniotic injection of either a clinical Ureaplasma parvum serovar 3 isolate that was sensitive to macrolide antibiotics (n = 16) or 10B medium (n = 16). At 100 days of gestation, ewes were then randomised to receive either maternal erythromycin treatment (30 mg/kg/day for four days) or no treatment. Ureaplasmas were isolated from amniotic fluid, chorioamnion, umbilical cord and fetal lung specimens, which were collected at the time of preterm delivery of the fetus (125 days of gestation). Surprisingly, the numbers of ureaplasmas colonising the amniotic fluid and fetal tissues were not different between experimentally-infected animals that received erythromycin treatment and infected animals that did not receive treatment (p > 0.05), nor were there any differences in fetal inflammation and histological chorioamnionitis between these groups (p > 0.05). These data demonstrate the inability of maternal erythromycin to eradicate intra-uterine ureaplasma infections. Erythromycin was detected in the amniotic fluid of animals that received antimicrobial treatment (but not in those that did not receive treatment) by liquid chromatography-mass spectrometry; however, the concentrations were below therapeutic levels (<10 – 76 ng/mL). These findings indicate that the ineffectiveness of standard, maternal erythromycin treatment of intra-amniotic ureaplasma infections may be due to the poor placental transfer of this drug. Subsequently, the phenotypic and genotypic characteristics of ureaplasmas isolated from the amniotic fluid and chorioamnion of pregnant sheep after chronic, intra-amniotic infection and low-level exposure to erythromycin were investigated. At 55 days of gestation, twelve pregnant ewes received an intra-amniotic injection of a clinical U. parvum serovar 3 isolate, which was sensitive to macrolide antibiotics. At 100 days of gestation, ewes received standard maternal erythromycin treatment (30 mg/kg/day for four days, n = 6) or saline (n = 6). Preterm fetuses were surgically delivered at 125 days of gestation and ureaplasmas were cultured from the amniotic fluid and the chorioamnion. The minimum inhibitory concentrations (MICs) of erythromycin, azithromycin and roxithromycin were determined for cultured ureaplasma isolates, and antimicrobial susceptibilities were different between ureaplasmas isolated from the amniotic fluid (MIC range = 0.08 – 1.0 mg/L) and chorioamnion (MIC range = 0.06 – 5.33 mg/L).
However, the increased resistance to macrolide antibiotics observed in chorioamnion ureaplasma isolates occurred independently of exposure to erythromycin in vivo. Remarkably, domain V of the 23S ribosomal RNA gene (which is the target site of macrolide antimicrobials) of chorioamnion ureaplasmas demonstrated significant variability (125 polymorphisms out of 422 sequenced nucleotides, 29.6%) when compared to the amniotic fluid ureaplasma isolates and the inoculum strain. This sequence variability did not occur as a consequence of exposure to erythromycin, as the nucleotide substitutions were identical between chorioamnion ureaplasmas isolated from different animals, including those that did not receive erythromycin treatment. We propose that these mosaic-like 23S ribosomal RNA gene sequences may represent gene fragments transferred via horizontal gene transfer. The significant differences observed in (i) susceptibility to macrolide antimicrobials and (ii) 23S ribosomal RNA sequences of ureaplasmas isolated from the amniotic fluid and chorioamnion suggest that the anatomical site from which they were isolated may exert selective pressures that alter the socio-microbiological structure of the bacterial population, by selecting for genetic changes and altered antimicrobial susceptibility profiles. The final experiment for this PhD examined antigenic size variation of the multiple banded antigen (MBA, a surface-exposed lipoprotein and predicted ureaplasmal virulence factor) in chronic, intra-amniotic ureaplasma infections. Previously defined ‘virulent-derived’ and ‘avirulent-derived’ clonal U. parvum serovar 6 isolates (each expressing a single MBA protein) were injected into the amniotic fluid of pregnant ewes (n = 20) at 55 days of gestation, and amniotic fluid was collected by amniocentesis every two weeks until the time of near-term delivery of the fetus (at 140 days of gestation). Both the avirulent and virulent clonal ureaplasma strains generated MBA size variants (ranging in size from 32 – 170 kDa) within the amniotic fluid of pregnant ewes. The mean number of MBA size variants produced within the amniotic fluid was not different between the virulent (mean = 4.2 MBA variants) and avirulent (mean = 4.6 MBA variants) ureaplasma strains (p = 0.87). Intra-amniotic infection with the virulent strain was significantly associated with the presence of meconium-stained amniotic fluid (p = 0.01), which is an indicator of fetal distress in utero. However, the severity of histological chorioamnionitis was not different between the avirulent and virulent groups. We demonstrated that ureaplasmas were able to persist within the amniotic fluid of pregnant sheep for 85 days, despite the host mounting an innate and adaptive immune response. Pro-inflammatory cytokines (interleukin (IL)-1β, IL-6 and IL-8) were elevated within the chorioamnion tissue of pregnant sheep from both the avirulent and virulent treatment groups, and this was significantly associated with the production of anti-ureaplasma IgG antibodies within maternal sera (p < 0.05). These findings suggested that the inability of the host immune response to eradicate ureaplasmas from the amniotic cavity may be due to continual size variation of MBA surface-exposed epitopes. Taken together, these data confirm that ureaplasmas are able to cause long-term in utero infections in a sheep model, despite standard antimicrobial treatment and the development of a host immune response.
The overall findings of this PhD project suggest that ureaplasmas are able to cause chronic, intra-amniotic infections due to (i) the limited placental transfer of erythromycin, which prevents the accumulation of therapeutic concentrations within the amniotic fluid; (ii) the ability of ureaplasmas to undergo rapid selection and genetic variation in vivo, resulting in ureaplasma isolates with variable MICs to macrolide antimicrobials colonising the amniotic fluid and chorioamnion; and (iii) antigenic size variation of the MBA, which may prevent eradication of ureaplasmas by the host immune response and account for differences in neonatal outcomes. The outcomes of this program of study have improved our understanding of the biology and pathogenesis of this highly adapted microorganism.

Relevance: 20.00%

Abstract:

Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches in the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean BMI (±SD) = 34.2±7.0 kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% of energy intake, body weight of 0.7 kg and waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in the interpretation of results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reporting dietary intake in this instance. This study provided an example of the methodological challenges experienced with measuring changes in absolute diet using a FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam; however, forgotten meals contributed to the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications to the method were made to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.

Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than the type of aid, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake were evaluated in a sample of 10 adults (6 males; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with a ratio of EI:TEE of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR method, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found between the two dietary measures across energy and macronutrients, except fat (r=0.24). High agreement was observed between dietitians for estimates of energy and macronutrient intake derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods.

This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
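The EI:TEE validity check described in Study 4 can be sketched as follows; the per-subject values and the 0.70 plausibility cutoff are illustrative assumptions rather than data or criteria from the thesis.

```python
# A minimal sketch of the validity check described above: each subject's
# reported energy intake (EI) is compared with total energy expenditure (TEE)
# from doubly labelled water, and low EI:TEE ratios flag likely under-reporting.
# The subject values and the 0.70 cutoff are illustrative assumptions.
def ei_tee_ratios(ei_mj, tee_mj):
    """Per-subject EI:TEE ratios for matched lists of intake and expenditure (MJ/day)."""
    return [ei / tee for ei, tee in zip(ei_mj, tee_mj)]

def flag_implausible(ratios, cutoff=0.70):
    """Indices of subjects whose reported intake looks implausibly low."""
    return [i for i, r in enumerate(ratios) if r < cutoff]

ei = [7.2, 9.5, 6.1, 10.4, 8.0]      # reported intake, MJ/day (made up)
tee = [11.0, 10.8, 10.2, 11.5, 9.6]  # measured expenditure, MJ/day (made up)
ratios = ei_tee_ratios(ei, tee)
print([round(r, 2) for r in ratios], "mean:", round(sum(ratios) / len(ratios), 2))
print("possible under-reporters:", flag_implausible(ratios))
```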

Relevance: 20.00%

Abstract:

The Six Sigma technique is one of the quality management strategies and is utilised for improving quality and productivity in the manufacturing process. It is inspired by Deming's Plan-Do-Check-Act (PDCA) cycle and comprises two major project methodologies, DMAIC and DMADV, each consisting of five phases. The DMAIC project methodology is used comprehensively in this research. In brief, DMAIC is utilised for improving an existing manufacturing process and involves the phases Define, Measure, Analyse, Improve and Control. The mask industry has become a significant industry in today's society since the outbreak of serious diseases such as Severe Acute Respiratory Syndrome (SARS), bird flu, influenza, swine flu and hay fever. Protecting the respiratory system has therefore become a fundamental requirement for preventing respiratory diseases. The mask is the most appropriate protective product inasmuch as it is effective in protecting the respiratory tract and resisting airborne virus infection. In order to satisfy various customers' requirements, thousands of mask products are designed for the market. Moreover, masks are also widely used in industries including the medical, semiconductor, food, traditional manufacturing and metal industries. Since masks are used to prevent dangerous diseases and safeguard people, their quality is a priority, and quality improvement techniques are of very high significance in the mask industry. The purpose of this research project is firstly to investigate the current quality control practices in the mask industry, then to explore the feasibility of using the Six Sigma technique in that industry, and finally to implement the Six Sigma technique in the case company to develop and evaluate the product quality process. This research mainly investigates the quality problems of the mask industry and the effectiveness of the Six Sigma technique in that industry, with the United Excel Enterprise Corporation (UEE) as the case company. The DMAIC project methodology of the Six Sigma technique is adopted and developed in this research. This research makes a significant contribution to knowledge. First, the main results contribute to discovering the root causes of quality problems in the mask industry. Secondly, the company was able to increase not only its acceptance rate but also its quality level by utilising the Six Sigma technique; hence, utilising the Six Sigma technique could increase the production capacity of the company. Third, the Six Sigma technique needs to be extensively modified to improve quality control in the mask industry. The impact of the Six Sigma technique on the overall performance of the business organisation should be further explored in future research.
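As context for the acceptance-rate and quality-level gains mentioned above, the following hedged sketch shows the standard Six Sigma arithmetic for converting defect counts into DPMO and a sigma level; the inspection figures are invented and are not data from the UEE case study.

```python
# Standard Six Sigma arithmetic: defects per million opportunities (DPMO) and
# the conventional short-term sigma level with a 1.5-sigma shift.
# Inspection numbers below are made up for illustration only.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Sigma level corresponding to a DPMO, using the usual 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

d = dpmo(defects=320, units=50_000, opportunities_per_unit=4)
print(round(d), round(sigma_level(d), 2))  # 1600 DPMO, roughly 4.4-4.5 sigma
```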

Relevance: 20.00%

Abstract:

Driving and using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study has found a low level of awareness, knowledge and risk perceptions among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). The lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of the use of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients primarily lies with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare observed as increasingly being provided in outpatient conditions (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram and indicates the level of potential impairment in relation to driving performance through the use of colour and advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects, and does not provide specific guidelines on the attitude or actions that the individual should adopt towards driving. It is reliant upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines. 
It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication user knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by including any beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high. The ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as being potentially at risk of driving while impaired, based on their reported recent use of medications requiring an ancillary warning label and level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health protective action. Participants did not express a particular need for being advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that the participants would be significantly more likely to follow advice from a doctor than a pharmacist. High levels of knowledge were revealed in terms of general principles about the effects of alcohol, illicit drugs and combinations of substances, and related health and crash risks. This may reflect a sample-specific effect. The professional guidelines for hospital pharmacists emphasise that advisory labels must be applied to medicines where applicable and that warning advice must be given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which has extended constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting intentions and behaviour related to compliance with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice.
It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation. The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed according to visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, there is little research that has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance have been extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core issue of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and red colour were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. According to the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall, and a further important finding was that higher warning label recall was associated with an increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label compared with the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication. The findings raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment.
The differences in the assignment of responsibility by the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches for the decision to drive while taking medication raise the core question of who is most able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications. The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These include enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and the reinforcement of the warning message. More broadly, it is suggested that there would be benefit in a wider dissemination of research-based information on increased crash risk, and in systematic monitoring and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications, and their interactions with existing medical conditions and other substances, on driving skills; the effects of variations in warning label design; individual behaviours and characteristics (particularly among those groups who are dependent upon prescription medication); and validation of consumer self-assessment of impairment.

Relevance: 20.00%

Abstract:

Red light cameras (RLCs) have been used to reduce right-angle collisions at signalized intersections. However, the effect of RLCs on motorcycle crashes has not been well investigated. The objective of this study is to evaluate the effectiveness of RLCs on motorcycle safety in Singapore. This is done by comparing the exposure of motorcycles, their proneness to at-fault right-angle crashes, and the resulting right-angle collisions at RLC sites with those at non-RLC sites. Estimating crash vulnerability from not-at-fault crash involvements, the study shows that with an RLC, the relative crash vulnerability or crash-involved exposure of motorcycles in right-angle crashes is reduced. Furthermore, field investigation of motorcycle maneuvers reveals that at non-RLC arms, motorcyclists usually queue beyond the stop-line, facilitating an earlier discharge and hence becoming more exposed to the conflicting stream. At arms with an RLC, however, motorcyclists are more restrained, to avoid activating the RLC, and hence become less exposed to conflicting traffic during the initial period of the green. The study also shows that in right-angle collisions, the proneness of motorcycles to at-fault crashes is the lowest among all vehicle types. Hence motorcycles are more likely to be victims than the responsible parties in right-angle crashes. RLCs have also been found to be very effective in reducing at-fault crash involvements of other vehicle types, which may implicate exposed motorcycles in the conflicting stream. Taking all this into account, the presence of RLCs should significantly reduce the vulnerability of motorcycles at signalized intersections.
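The quasi-induced exposure logic described above (not-at-fault involvements as a proxy for exposure) can be illustrated with a small sketch; the crash counts below are hypothetical, not the Singapore data.

```python
# Quasi-induced exposure illustration: the ratio of a vehicle type's at-fault
# share to its not-at-fault share indicates its relative proneness to causing
# right-angle crashes. All counts are hypothetical.
def relative_proneness(at_fault, not_at_fault):
    """Map vehicle type -> (at-fault share) / (not-at-fault share)."""
    af_total = sum(at_fault.values())
    naf_total = sum(not_at_fault.values())
    return {
        vtype: (at_fault[vtype] / af_total) / (not_at_fault[vtype] / naf_total)
        for vtype in at_fault
    }

# Hypothetical right-angle crash counts by vehicle type.
at_fault = {"motorcycle": 12, "car": 140, "heavy_vehicle": 28}
not_at_fault = {"motorcycle": 45, "car": 150, "heavy_vehicle": 25}
print(relative_proneness(at_fault, not_at_fault))
# A ratio below 1 (as for motorcycles in this toy data) means the type appears
# more often as the victim than as the at-fault party, consistent with the
# finding described above.
```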

Relevance: 20.00%

Abstract:

This paper examines the instances and motivations for noble cause corruption perpetrated by NSW police officers. Noble cause corruption occurs when a person tries to produce a just outcome through unjust methods, for example, police manipulating evidence to ensure a conviction of a known offender. Normal integrity regime initiatives are unlikely to halt noble cause corruption as its basis lies in an attempt to do good by compensating for the apparent flaws in an unjust system. This paper suggests that the solution lies in a change of culture through improved leadership and uses the political theories of Roger Myerson to propose a possible solution. Evidence from police officers in transcripts of the Wood Inquiry (1997) is examined to discern their participation in noble cause corruption and their rationalisation of this behaviour. The overall findings are that officers were motivated to indulge in this type of corruption through a desire to produce convictions where they felt the system unfairly worked against their ability to do their job correctly. We have added to the literature by demonstrating that the rewards can be positive: police seek job satisfaction through the ability to convict the guilty, and they will be able to do this through better equipment and investigative powers.