296 results for PREDICTOR


Relevance: 10.00%

Abstract:

Newly licensed drivers on a provisional or intermediate licence have the highest crash risk when compared with any other group of drivers. In comparison, learner drivers have the lowest crash risk. Graduated driver licensing is one countermeasure that has been demonstrated to effectively reduce the crashes of novice drivers. This thesis examined the graduated driver licensing systems in two Australian states in order to better understand the behaviour of learner drivers, provisional drivers and the supervisors of learner drivers. By doing this, the thesis investigated the personal, social and environmental influences on novice driver behaviour as well as providing effective baseline data against which to measure subsequent changes to the licensing systems. In the first study, conducted prior to the changes to the graduated driver licensing system introduced in mid-2007, drivers who had recently obtained their provisional licence in Queensland and New South Wales were interviewed by telephone regarding their experiences while driving on their learner licence. Of the 687 eligible people approached to participate at driver licensing centres, 392 completed the study representing a response rate of 57.1 per cent. At the time the data was collected, New South Wales represented a more extensive graduated driver licensing system when compared with Queensland. The results suggested that requiring learners to complete a mandated number of hours of supervised practice impacts on the amount of hours that learners report completing. While most learners from New South Wales reported meeting the requirement to complete 50 hours of practice, it appears that many stopped practising soon after this goal was achieved. In contrast, learners from Queensland, who were not required to complete a specific number of hours at the time of the survey, tended to fall into three groups. The first group appeared to complete the minimum number of hours required to pass the test (less than 26 hours), the second group completed 26 to 50 hours of supervised practice while the third group completed significantly more practice than the first two groups (over 100 hours of supervised practice). Learner drivers in both states reported generally complying with the road laws and were unlikely to report that they had been caught breaking the road rules. They also indicated that they planned to obey the road laws once they obtained their provisional licence. However, they were less likely to intend to comply with recommended actions to reduce crash risk such as limiting their driving at night. This study also identified that there were relatively low levels of unaccompanied driving (approximately 15 per cent of the sample), very few driving offences committed (five per cent of the sample) and that learner drivers tended to use a mix of private and professional supervisors (although the majority of practice is undertaken with private supervisors). Consistent with the international literature, this study identified that very few learner drivers had experienced a crash (six per cent) while on their learner licence. The second study was also conducted prior to changes to the graduated driver licensing system and involved follow up interviews with the participants of the first study after they had approximately 21 months driving experience on their provisional licence. Of the 392 participants that completed the first study, 233 participants completed the second interview (representing a response rate of 59.4 per cent). 
As with the first study, at the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. For instance, novice drivers from New South Wales were required to progress through two provisional licence phases (P1 and P2) while there was only one provisional licence phase in Queensland. Among the participants in this second study, almost all provisional drivers (97.9 per cent) owned or had access to a vehicle for regular driving. They reported that they were unlikely to break road rules, such as driving after a couple of drinks, but were also unlikely to comply with recommended actions, such as limiting their driving at night. When their provisional driving behaviour was compared to the stated intentions from the first study, the results suggested that their intentions were not a strong predictor of their subsequent behaviour. Their perception of risk associated with driving declined from when they first obtained their learner licence to when they had acquired provisional driving experience. Just over 25 per cent of participants in study two reported that they had been caught committing driving offences while on their provisional licence. Nearly one-third of participants had crashed while driving on a provisional licence, although few of these crashes resulted in injuries or hospitalisations. To complement the first two studies, the third study examined the experiences of supervisors of learner drivers, as well as their perceptions of their learner’s experiences. This study was undertaken after the introduction of the new graduated driver licensing systems in Queensland and New South Wales in mid- 2007, providing insights into the impacts of these changes from the perspective of supervisors. The third study involved an internet survey of 552 supervisors of learner drivers. Within the sample, approximately 50 per cent of participants supervised their own child. Other supervisors of the learner drivers included other parents or stepparents, professional driving instructors and siblings. For two-thirds of the sample, this was the first learner driver that they had supervised. Participants had provided an average of 54.82 hours (sd = 67.19) of supervision. Seventy-three per cent of participants indicated that their learners’ logbooks were accurate or very accurate in most cases, although parents were more likely than non-parents to report that their learners’ logbook was accurate (F (1,546) = 7.74, p = .006). There was no difference between parents and non-parents regarding whether they believed the log book system was effective (F (1,546) = .01, p = .913). The majority of the sample reported that their learner driver had had some professional driving lessons. Notwithstanding this, a significant proportion (72.5 per cent) believed that parents should be either very involved or involved in teaching their child to drive, with parents being more likely than non-parents to hold this belief. In the post mid-2007 graduated driver licensing system, Queensland learner drivers are able to record three hours of supervised practice in their log book for every hour that is completed with a professional driving instructor, up to a total of ten hours. Despite this, there was no difference identified between Queensland and New South Wales participants regarding the amount of time that they reported their learners spent with professional driving instructors (X2(1) = 2.56, p = .110). 
Supervisors from New South Wales were more likely to ensure that their learner driver complied with the road laws. Additionally, with the exception of drug driving laws, New South Wales supervisors placed more importance than those from Queensland on teaching safety-related behaviours such as remaining within the speed limit, car control and hazard perception. This may be indicative of more intensive road safety educational efforts in New South Wales or the longer time that graduated driver licensing has operated in that jurisdiction. However, other factors may have contributed to these findings and further research is required to explore the issue. In addition, supervisors reported that their learner driver was involved in very few crashes (3.4 per cent) and offences (2.7 per cent). This relatively low reported crash rate is similar to that identified in the first study. Most of the graduated driver licensing research to date has been applied in nature and lacked a strong theoretical foundation. The studies in this program of research used Akers’ social learning theory to explore the self-reported behaviour of novice drivers and their supervisors. This theory was selected as it has previously been found to provide a relatively comprehensive framework for explaining a range of driver behaviours, including novice driver behaviour. Sensation seeking was also used in the first two studies to complement the non-social rewards component of Akers’ social learning theory. This program of research identified that both Akers’ social learning theory and sensation seeking were useful in predicting the behaviour of learner and provisional drivers over and above socio-demographic factors. Within the first study, Akers’ social learning theory accounted for an additional 22 per cent of the variance in learner driver compliance with the law, over and above a range of socio-demographic factors such as age, gender and income. The two constructs within Akers’ theory which were significant predictors of learner driver compliance were the behavioural dimension of differential association relating to friends, and anticipated rewards. Sensation seeking predicted an additional six per cent of the variance in learner driver compliance with the law. When considering a learner driver’s intention to comply with the law while driving on a provisional licence, Akers’ social learning theory accounted for an additional 10 per cent of the variance above socio-demographic factors, with anticipated rewards being a significant predictor. Sensation seeking predicted an additional four per cent of the variance. The results suggest that the more rewards individuals anticipate for complying with the law, the more likely they are to obey the road rules. Further research is needed to identify which specific rewards are most likely to encourage novice drivers’ compliance with the law. In the second study, Akers’ social learning theory predicted an additional 40 per cent of the variance in self-reported compliance with road rules over and above socio-demographic factors, while sensation seeking accounted for an additional five per cent of the variance. A number of Akers’ social learning theory constructs significantly predicted provisional driver compliance with the law, including the behavioural dimension of differential association for friends, the normative dimension of differential association, personal attitudes and anticipated punishments.
The consistent prediction of additional variance by sensation seeking over and above the variables within Akers’ social learning theory in both studies one and two suggests that sensation seeking is not fully captured within the non-social rewards dimension of Akers’ social learning theory, at least for novice drivers. It appears that novice drivers are strongly influenced by the desire to engage in new and intense experiences. While socio-demographic factors and the perception of risk associated with driving had an important role in predicting the behaviour of the supervisors of learner drivers, Akers’ social learning theory provided further levels of prediction over and above these factors. The Akers’ social learning theory variables predicted an additional 14 per cent of the variance in the extent to which supervisors ensured that their learners complied with the law and an additional eight per cent of the variance in the supervisors’ provision of a range of practice experiences. The normative dimension of differential association, personal attitudes towards the use of professional driving instructors and anticipated rewards were significant predictors of supervisors ensuring that their learner complied with the road laws, while the normative dimension was important for the range of practice provided. This suggests that supervisors who associate with other supervisors who ensure their learners comply with the road laws and who provide a range of practice are more likely to engage in these behaviours themselves. Within this program of research, there were several limitations, including the method of recruitment of participants within the first study, the lower participation rate in the second study, an inability to calculate a response rate for study three and the use of self-report data for all three studies. Within the first study, participants were only recruited from larger driver licensing centres to ensure that there was a sufficient throughput of drivers to approach. This may have biased the results due to possible differences in learners who obtain their licences in locations with smaller licensing centres. Only 59.4 per cent of the sample in the first study completed the second study. This may be a limitation if there was a common reason why those not participating were unable to complete the interview, leading to a systematic impact on the results. The third study used a combination of convenience and snowball sampling, which meant that it was not possible to calculate a response rate. All three studies used self-report data which, in many cases, is considered a limitation. However, self-report data may be the only method that can be used to obtain some information. This program of research has a number of implications for countermeasures in both the learner licence phase and the provisional licence phase. During the learner phase, licensing authorities need to carefully consider the number of hours that they mandate learner drivers must complete before they obtain their provisional driving licence. If the mandated number of hours is set too low, there may be inadvertent negative effects. This research suggests that logbooks may be a useful tool for learners and their supervisors in recording and structuring their supervised practice. However, it would appear that the usage rates for logbooks will remain low if they remain voluntary.
One strategy for achieving larger amounts of supervised practice is for learner drivers and their supervisors to make supervised practice part of their everyday activities. As well as assisting the learner driver to accumulate the required number of hours of supervised practice, this would ensure that they gain experience in the types of environments that they will probably encounter when driving unaccompanied in the future, such as to and from education or work commitments. There is also a need for policy processes to ensure that parents and professional driving instructors communicate effectively regarding the learner driver’s progress. This is required as most learners spend at least some time with a professional instructor despite receiving significant amounts of practice with a private supervisor. However, many supervisors did not discuss their learner’s progress with the driving instructor. During the provisional phase, there is a need to strengthen countermeasures to address the high crash risk of these drivers. Although many of these crashes are minor, most involve at least one other vehicle. Therefore, there are social and economic benefits to reducing these crashes. If the new, post-2007 graduated driver licensing systems do not significantly reduce crash risk, there may be a need to introduce further provisional licence restrictions, such as separate night driving and peer passenger restrictions (as opposed to the hybrid version of these two restrictions operating in both Queensland and New South Wales). Provisional drivers appear to be more likely to obey some provisional licence laws, such as lower blood alcohol content limits, than others, such as speed limits. Therefore, there may be a need to introduce countermeasures to encourage provisional drivers to comply with specific restrictions. When combined, these studies provided significant information regarding graduated driver licensing programs. This program of research investigated graduated driver licensing utilising cross-sectional and longitudinal designs to develop our understanding of the experiences of novice drivers as they progress through the system, with the aim of reducing crash risk once they commence driving by themselves.
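As a point of reference for the hierarchical regression results reported above, the following is a minimal sketch of how "additional variance explained over and above socio-demographic factors" can be computed from nested models; the file name, variable names and predictors are assumptions for illustration, not the thesis data.

```python
# Sketch of a hierarchical (nested) regression: block 1 enters socio-demographic
# factors, block 2 adds Akers' social learning constructs and sensation seeking,
# and the change in R-squared corresponds to the "additional variance explained"
# figures reported above. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("learner_survey.csv")  # hypothetical data file

block1 = smf.ols("compliance ~ age + gender + income", data=df).fit()
block2 = smf.ols("compliance ~ age + gender + income + "
                 "differential_association_friends + anticipated_rewards + "
                 "sensation_seeking", data=df).fit()

delta_r2 = block2.rsquared - block1.rsquared
print(f"Block 1 R^2 = {block1.rsquared:.3f}")
print(f"Block 2 R^2 = {block2.rsquared:.3f} (change = {delta_r2:.3f})")
```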

Relevance: 10.00%

Abstract:

Driving and using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study has found a low level of awareness, knowledge and risk perceptions among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). The lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of the use of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients primarily lies with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare observed as increasingly being provided in outpatient conditions (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram and indicates the level of potential impairment in relation to driving performance through the use of colour and advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects, and does not provide specific guidelines on the attitude or actions that the individual should adopt towards driving. It is reliant upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines. 
It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication user knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by including any beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high. The ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as being potentially at risk of driving while impaired, based on their reported recent use of medications requiring an ancillary warning label and their level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health protective action. Participants did not express a particular need to be advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that the participants would be significantly more likely to follow advice from a doctor than from a pharmacist. Participants showed high levels of knowledge of general principles about the effects of alcohol, illicit drugs and combinations of substances, and of the related health and crash risks. This may reflect a sample-specific effect: the professional guidelines for hospital pharmacists emphasise that advisory labels must be applied to medicines where applicable and that warning advice must be given to all patients on medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which has extended constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting intentions and behaviour relating to compliance with the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice.
It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up, on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation. The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed according to visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, there is little research that has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance have been extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core issue of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and the red colour were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. According to the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall, and a further important finding was that higher warning label recall was associated with an increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label compared with the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication. The findings raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment.
The differences in the assignment of responsibility by the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches for the decision to drive while taking medication raises the core question of who is most able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications. The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These have included enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and the reinforcement of the warning message. More broadly, it is suggested that there would be benefit in a wider dissemination of research-based information on increased crash risk and systematic monitoring and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications and interactions with existing medical conditions and other substances on driving skills, effects of variations in warning label design, individual behaviours and characteristics (particularly among those groups who are dependent upon prescription medication) and validation of consumer self-assessment of impairment.
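A minimal sketch of the two-step Health Action Process Approach structure described above (positive outcome expectancies predicting the intention to avoid driving if affected, and intention in turn predicting planning); the data file and all variable names are assumptions for illustration, not the thesis measures.

```python
# Illustrative sketch (not the thesis analysis) of the HAPA-style structure:
# a motivational model for intention, followed by a volitional model for planning.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("outpatient_survey.csv")  # hypothetical data file

# Step 1: intention to avoid driving if a medication effect is noticed
intention_model = smf.ols(
    "intention_avoid_driving ~ outcome_expectancies + risk_perception + self_efficacy",
    data=df).fit()

# Step 2: planning as a function of intention
planning_model = smf.ols("planning ~ intention_avoid_driving", data=df).fit()

print(intention_model.summary().tables[1])
print(planning_model.summary().tables[1])
```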

Relevance: 10.00%

Abstract:

Deprivation assessed using the Index of Multiple Deprivation (IMD) has been shown to be an independent risk factor for 1-year mortality in outpatients with chronic obstructive pulmonary disease (COPD) (Collins et al, 2010). IMD combines a number of economic and social issues (eg, health, education, employment) into one overall deprivation score; the higher the score, the higher an individual's deprivation. Whilst malnutrition in COPD has been linked to increased healthcare use, it is not clear whether deprivation is also independently associated. This study aimed to investigate the influence of deprivation on 1-year healthcare utilisation in outpatients with COPD. IMD was established for 424 outpatients with COPD according to the geographical location of each patient's address (postcode) (Nobel et al, 2008) and related to their healthcare use in the year following screening. Patients were routinely screened in outpatient clinics for malnutrition using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia 2003); mean age 73 (SD 9.9) years; body mass index 25.8 (SD 6.3) kg/m², with healthcare use collected for 1 year from screening (Abstract P147 Table 1). Deprivation assessed using IMD (mean 15.9; SD 11.1) was found to be a significant predictor of the frequency and duration of emergency hospital admissions as well as the duration of elective hospital admissions. Deprivation was also linked to reduced secondary care outpatient appointment attendance, but not to an increase in failure to attend, and deprivation was not associated with increased disease severity, as classified by the GOLD criteria (p=0.580). COPD outpatients residing in more deprived areas experience increased hospitalisation rates but decreased outpatient appointment attendance. The underlying reason behind this disparity in healthcare use requires further investigation.
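A minimal sketch of one way the relationship between deprivation and admission counts could be modelled; the study reports only that IMD was a significant predictor, so the count-model choice, file name and column names below are assumptions for illustration.

```python
# Illustrative sketch (not the study's analysis): a Poisson GLM relating the number of
# emergency admissions in the follow-up year to the IMD deprivation score.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("copd_outpatients.csv")  # hypothetical data file

model = smf.glm("emergency_admissions ~ imd_score",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
# exp(coef) gives the multiplicative change in expected admissions per 1-point rise in IMD
```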

Relevance: 10.00%

Abstract:

In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of the theoretical analysis. These methods and techniques can also be extended to other kinds of multi-term fractional time-space models with a fractional Laplacian.
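For reference, a common general form of such problems (an illustrative formulation in one space dimension, not necessarily the exact equations studied in the paper) is the multi-term time-fractional wave-diffusion equation with Caputo derivatives:

```latex
% Illustrative general form; the coefficients a_i, orders \alpha_i, diffusivity \kappa
% and source f are assumptions for the sketch, not taken from the paper.
\[
  \sum_{i=1}^{m} a_i \, {}^{C}\!D_t^{\alpha_i} u(x,t)
  = \kappa \,\frac{\partial^{2} u(x,t)}{\partial x^{2}} + f(x,t),
  \qquad a_i > 0,
\]
% where each Caputo derivative of order n-1 < \alpha \le n is defined by
\[
  {}^{C}\!D_t^{\alpha} u(x,t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{\,n-1-\alpha}\,
    \frac{\partial^{n} u(x,s)}{\partial s^{n}}\, \mathrm{d}s,
  \qquad n-1 < \alpha \le n .
\]
```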

Relevance: 10.00%

Abstract:

Introduction: Smoking status in outpatients with chronic obstructive pulmonary disease (COPD) has been associated with a low body mass index (BMI) and reduced mid-arm muscle circumference (Cochrane & Afolabi, 2004). Individuals with COPD identified as malnourished have also been found to be twice as likely to die within 1 year compared to non-malnourished patients (Collins et al., 2010). Although malnutrition is both preventable and treatable, it is not clear what influence current smoking status, another modifiable risk factor, has on malnutrition risk. The current study aimed to establish the influence of smoking status on malnutrition risk and 1-year mortality in outpatients with COPD. Methods: A prospective nutritional screening survey was carried out between July 2008 and May 2009 at a large teaching hospital (Southampton General Hospital) and a smaller community hospital within Hampshire (Lymington New Forest Hospital). In total, 424 outpatients with a diagnosis of COPD were routinely screened using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia, 2003); 222 males, 202 females; mean (SD) age 73 (9.9) years; mean (SD) BMI 25.9 (6.4) kg/m². Smoking status on the date of screening was obtained for 401 of the outpatients. Severity of COPD was assessed using the GOLD criteria, and social deprivation was determined using the Index of Multiple Deprivation (Nobel et al., 2008). Results: The overall prevalence of malnutrition (medium + high risk) was 22%, with 32% of current smokers at risk (current smokers accounted for 19% of the total COPD population). In comparison, 19% of non-smokers and ex-smokers were likely to be malnourished [odds ratio, 1.965; 95% confidence interval (CI), 1.133–3.394; P = 0.015]. Smoking status remained an independent risk factor for malnutrition even after adjustment for age, social deprivation and disease severity (odds ratio, 2.048; 95% CI, 1.085–3.866; P = 0.027) using binary logistic regression. After adjusting for age, disease severity, social deprivation and smoking status, malnutrition remained a significant predictor of 1-year mortality [odds ratio (medium + high risk versus low risk), 2.161; 95% CI, 1.021–4.573; P = 0.044], whereas smoking status did not (odds ratio for smokers versus ex-smokers + non-smokers, 1.968; 95% CI, 0.788–4.913; P = 0.147). Discussion: This study highlights the potential importance of combined nutritional support and smoking cessation in order to treat malnutrition. The close association between smoking status and malnutrition risk in COPD suggests that smoking is an important consideration in the nutritional management of malnourished COPD outpatients. Conclusions: Smoking status in COPD outpatients is a significant independent risk factor for malnutrition and a weaker (non-significant) predictor of 1-year mortality. Malnutrition significantly predicted 1-year mortality. References: Cochrane, W.J. & Afolabi, O.A. (2004) Investigation into the nutritional status, dietary intake and smoking habits of patients with chronic obstructive pulmonary disease. J. Hum. Nutr. Diet. 17, 3–11. Collins, P.F., Stratton, R.J., Kurukulaaratchy, R., Warwick, H., Cawood, A.L. & Elia, M. (2010) ‘MUST’ predicts 1-year survival in outpatients with chronic obstructive pulmonary disease. Clin. Nutr. 5, 17. Elia, M. (Ed) (2003) The ‘MUST’ Report. BAPEN. http://www.bapen.org.uk (accessed on March 30 2011). Nobel, M., McLennan, D., Wilkinson, K., Whitworth, A. & Barnes, H. (2008) The English Indices of Deprivation 2007.
http://www.communities.gov.uk (accessed on March 30 2011).
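A minimal sketch of the adjusted binary logistic regression described above (smoking status predicting malnutrition risk, adjusted for age, deprivation and GOLD stage); the data file and column names are assumptions, not the study's code.

```python
# Illustrative sketch: logistic regression of malnutrition risk (0/1) on smoking status
# with covariate adjustment, mirroring the adjusted odds ratio of about 2.0 reported above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("copd_screening.csv")  # hypothetical data file

model = smf.logit(
    "malnourished ~ current_smoker + age + imd_score + C(gold_stage)",
    data=df).fit()

odds_ratios = np.exp(model.params)          # exponentiated coefficients = odds ratios
conf_int = np.exp(model.conf_int())         # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```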

Relevance: 10.00%

Abstract:

KLK15 over-expression is reported to be a significant predictor of reduced progression-free survival and overall survival in ovarian cancer. Our aim was to analyse the KLK15 gene for putative functional single nucleotide polymorphisms (SNPs) and assess the association of these and KLK15 HapMap tag SNPs with ovarian cancer survival. Results: In silico analysis was performed to identify KLK15 regulatory elements and to classify potentially functional SNPs in these regions. After SNP validation and identification by DNA sequencing of ovarian cancer cell lines and aggressive ovarian cancer patients, 9 SNPs were shortlisted and genotyped using the Sequenom iPLEX Mass Array platform in a cohort of Australian ovarian cancer patients (N = 319). In the Australian dataset we observed significantly worse survival for the KLK15 rs266851 SNP in a dominant model (Hazard Ratio (HR) 1.42, 95% CI 1.02-1.96). This association was observed in the same direction in two independent datasets, with a combined HR for the three studies of 1.16 (95% CI 1.00-1.34). This SNP lies 15 bp downstream of a novel exon and is predicted to be involved in mRNA splicing. The mutant allele is also predicted to abrogate an HSF-2 binding site. Conclusions: We provide evidence of association for the SNP rs266851 with ovarian cancer survival. Our results provide the impetus for downstream functional assays and additional independent validation studies to assess the role of KLK15 regulatory SNPs and KLK15 isoforms with alternative intracellular functional roles in ovarian cancer survival.
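A minimal sketch of a dominant-model survival analysis of the kind reported above (a hazard ratio for carriers of at least one minor allele); the data file and column names are assumptions, not the study's pipeline.

```python
# Illustrative sketch (not the authors' code): Cox proportional hazards regression of
# ovarian cancer survival on a SNP coded under a dominant model, as for rs266851 above.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ovarian_cohort.csv")  # hypothetical file: genotype = 0/1/2 minor-allele copies

# Dominant model: carriers of at least one minor allele vs non-carriers
df["rs266851_dom"] = (df["genotype"] >= 1).astype(int)

cph = CoxPHFitter()
cph.fit(df[["surv_months", "event", "rs266851_dom"]],
        duration_col="surv_months", event_col="event")
cph.print_summary()  # the exp(coef) column gives the hazard ratio with its 95% CI
```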

Relevance: 10.00%

Abstract:

Background: Pre-participation screening is commonly used to measure and assess potential intrinsic injury risk. The single leg squat (SLS) is one such clinical screening measure used to assess lumbopelvic stability and associated intrinsic injury risk. With the addition of a decline board, the single leg decline squat (SLDS) has been shown to reduce ankle dorsiflexion restrictions and to allow greater sagittal plane movement of the hip and knee. On this basis, the SLDS has been employed in the Cricket Australia physiotherapy screening protocols as a measure of lumbopelvic control in place of the more traditional single leg flat squat (SLFS). Previous research has failed to demonstrate which squatting technique allows for a more comprehensive assessment of lumbopelvic stability. Only tenuous links are drawn between kinematics and hip strength measures within the literature for the SLS. Formal evaluation of subjective screening methods has also been suggested within the literature. Purpose: This study had several focal points, namely: 1) to compare the kinematic differences between the two single leg squatting conditions, primarily in the five key kinematic variables fundamental to subjectively assessing lumbopelvic stability; 2) to determine the effect that ankle dorsiflexion range of motion has on squat kinematics in the two squat techniques; 3) to examine the association between key kinematics and subjective physiotherapists’ assessments; and 4) to explore the association between key kinematics and hip strength. Methods: Nineteen (n=19) subjects performed five SLDS and five SLFS on each leg while being filmed by an 8-camera motion analysis system. Four hip strength measures (internal/external rotation and abduction/adduction) and ankle dorsiflexion range of motion were measured using a hand-held dynamometer and a goniometer, respectively, on 16 of these subjects. The same 16 participants were subjectively assessed for lumbopelvic stability by an experienced physiotherapist. Paired samples t-tests were performed on the five predetermined kinematic variables to assess the differences between squat conditions. A Bonferroni correction for multiple comparisons was used, which adjusted the significance value to p = 0.005 for the paired t-tests. Linear regressions were used to assess the relationship between kinematics, ankle range of motion and hip strength measures. Bivariate correlations between hip strength measures, kinematics and pelvic obliquity were employed to investigate any possible relationships. Results: 1) Significant kinematic differences between squats were observed in dominant (D) and non-dominant (ND) end of range hip external rotation (ND p < 0.001; D p = 0.004) and hip adduction kinematics (ND p < 0.001; D p < 0.001). For the mean angle, only the non-dominant leg showed significant differences, in hip adduction (p = 0.001) and hip external rotation (p < 0.001). 2) Significant linear relationships were observed between clinical measures of ankle dorsiflexion and sagittal plane kinematics, namely SLFS dominant ankle (p = 0.006; R² = 0.429), SLFS non-dominant knee (p = 0.015; R² = 0.352) and SLFS non-dominant ankle (p = 0.027; R² = 0.305) kinematics. Only the dominant ankle (p = 0.020; R² = 0.331) was found to have a relationship with the decline squat. 3) Strength measures had only tenuous associations with the subjective assessments of lumbopelvic stability, with no significant relationships observed.
4) For the non-dominant leg, external rotation strength and abduction strength were found to be significantly correlated with hip rotation kinematics (Newtons: r = 0.458, p = 0.049; normalised for bodyweight: r = 0.469, p = 0.043) and pelvic obliquity (normalised for bodyweight: r = 0.498, p = 0.030), respectively, for the SLFS only. No significant relationships were observed in the dominant leg for either squat condition. Some elements of the hip strength screening protocols had linear relationships with kinematics of the lower limb, particularly the sagittal plane movements of the knee and ankle. Discussion: The key finding of this study was that kinematic differences can occur at the hip, without significant kinematic differences at the knee, as a result of the introduction of a decline board. Further observations reinforce the influence of limited ankle dorsiflexion range of motion on sagittal plane movement of the hip and knee and, in turn, on multiplanar kinematics of the lower limb. The kinematic differences between conditions have clinical implications for screening protocols that employ frontal plane movement of the knee as a guide for femoral adduction and rotation. Subjects who returned stronger hip strength measurements also appeared to squat deeper, as characterised by differences in sagittal plane kinematics of the knee and ankle. Despite the aforementioned findings, the relationship between hip strength and lower limb kinematics remains largely tenuous in the assessment of lumbopelvic stability using the SLS. The association between kinematics and the subjective measures of lumbopelvic stability also remains tenuous between and within SLS screening protocols. More functional measures of hip strength are needed to further investigate these relationships. Conclusion: The type of SLS (flat or decline) should be taken into account when screening for lumbopelvic stability. Changes to lower limb kinematics, especially around the hip and pelvis, were observed with the introduction of a decline board despite no difference in frontal plane knee movements. Differences in passive ankle dorsiflexion range of motion yielded variations in knee and ankle kinematics during a self-selected single leg squatting task. Clinically, removing posterior ankle restraints while using frontal plane knee movement as a guide to changes at the hip may result in inaccurate screening of lumbopelvic stability. The relationship between sagittal plane lower limb kinematics and hip strength suggests that self-selected squat depth may be a useful predictor of lumbopelvic stability. Further research in this area is required.
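A minimal sketch of the paired-comparisons procedure described in the Methods (paired t-tests with a Bonferroni-adjusted threshold of 0.05/10 = 0.005, matching the value reported above); the kinematic variable names and simulated angles are placeholders, not the study data.

```python
# Illustrative sketch (not the study's analysis code): paired-samples t-tests comparing
# SLDS vs SLFS on five kinematic variables for each leg, with a Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
variables = ["hip_adduction", "hip_external_rotation", "knee_valgus",
             "pelvic_obliquity", "trunk_lean"]        # hypothetical variable names
alpha = 0.05 / (len(variables) * 2)                   # five variables x two legs -> 0.005

for leg in ["dominant", "non_dominant"]:
    for var in variables:
        slds = rng.normal(10, 3, 19)                  # placeholder angles, 19 subjects
        slfs = rng.normal(12, 3, 19)
        t, p = stats.ttest_rel(slds, slfs)            # paired-samples t-test
        flag = "significant" if p < alpha else "ns"
        print(f"{leg:12s} {var:22s} t={t:5.2f} p={p:.4f} ({flag})")
```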

Relevance: 10.00%

Abstract:

The overall objective of this thesis is to explore how and why the content of individuals' psychological contracts changes over time. The contract is generally understood as "individual beliefs, shaped by the organisation, regarding the terms of an exchange agreement between individuals and their organisation" (Rousseau, 1995, p. 9). With an overall study sampling frame of 320 graduate organisational newcomers, a mixed method longitudinal research design comprised of three sequential, inter-related studies is employed in order to capture the change process. From the 15 semi-structured interviews conducted in Study 1, the key findings included identifying a relatively high degree of mutuality between employees' and their managers' reciprocal contract beliefs around the time of organisational entry. Also, at this time, individuals had developed specific components of their contract content through a mix of social network information (regarding broader employment expectations) and perceptions of various elements of their particular organisation's reputation (for more firm-specific expectations). Study 2 utilised a four-wave survey approach (available to the full sampling frame) over the 14 months following organisational entry to explore the 'shape' of individuals' contract change trajectories and the role of four theorised change predictors in driving these trajectories. The predictors represented an organisational-level informational cue (perceptions of corporate reputation), a dyadic-level informational cue (perceptions of manager-employee relationship quality) and two individual difference variables (affect and hardiness). Through the use of individual growth modelling, the findings showed differences in the general change patterns across contract content components of perceived employer (exhibiting generally quadratic change patterns) and employee (exhibiting generally no-change patterns) obligations. Further, individuals differentially used the predictor variables to construct beliefs about specific contract content. While both organisational- and dyadic-level cues were focused upon to construct employer obligation beliefs, organisational-level cues and individual difference variables were focused upon to construct employee obligation beliefs. Through undertaking 26 semi-structured interviews, Study 3 focused upon gaining a richer understanding of why participants' contracts changed, or otherwise, over the study period, with a particular focus upon the roles of breach and violation. Breach refers to an employee's perception that an employer obligation has not been met and violation refers to the negative and affective employee reactions which may ensue following a breach. The main contribution of these findings was identifying that subsequent to a breach or violation event a range of 'remediation effects' could be activated by employees which, depending upon their effectiveness, served to instigate either breach or contract repair or both. These effects mostly instigated broader contract repair and were generally cognitive strategies enacted by an individual to re-evaluate the breach situation and re-focus upon other positive aspects of the employment relationship. As such, the findings offered new evidence for a clear distinction between remedial effects which serve to only repair the breach (and thus the contract) and effects which only repair the contract more broadly; however, when effective, both resulted in individuals again viewing their employment relationships positively. 
Overall, in response to the overarching research question of this thesis, how and why individuals' psychological contract beliefs change, individuals do indeed draw upon various information sources, particularly at the organisational-level, as cues or guides in shaping their contract content. Further, the 'shapes' of the changes in beliefs about employer and employee obligations generally follow different, and not necessarily linear, trajectories over time. Finally, both breach and violation and also remedial actions, which address these occurrences either by remedying the breach itself (and thus the contract) or the contract only, play central roles in guiding individuals' contract changes to greater or lesser degrees. The findings from the thesis provide both academics and practitioners with greater insights into how employees construct their contract beliefs over time, the salient informational cues used to do this and how the effects of breach and violation can be mitigated through creating an environment which facilitates the use of effective remediation strategies.
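A minimal sketch of the individual growth modelling used for Study 2 above (a mixed-effects model with linear and quadratic time terms and a wave-level predictor); the long-format file, variable names and chosen predictor are assumptions, not the thesis models.

```python
# Illustrative sketch (not the thesis code): an individual growth model for perceived
# employer obligations across survey waves, with random intercepts and slopes per person.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("contract_waves_long.csv")  # hypothetical: one row per person per wave

model = smf.mixedlm(
    "employer_obligations ~ time + I(time**2) + corporate_reputation",
    data=long_df,
    groups=long_df["participant_id"],
    re_formula="~time")            # random intercept and random slope for time
result = model.fit()
print(result.summary())
```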

Relevance: 10.00%

Abstract:

Recent empirical research has found that the psychological consequences for young people involved in cyberbullying are more severe than in the case of traditional bullying (Campbell, Spears, Slee, Butler, & Kift, 2012; Perren, Dooley, Shaw, & Cross, 2010). Cybervictimisation has been found to be a significant predictor of depressive symptoms over and above that of being victimised by traditional bullying (Perren et al., 2010). Cybervictims also have reported higher anxiety scores and social difficulties than traditional victims, with those students who had been bullied by both forms showing similar anxiety and depression scores to cyberbullying victims (Campbell et al., 2012). This is supported by the subjective views of many young people, not involved in bullying, who believed that cyberbullying is far more harmful than traditional bullying (Cross et al., 2009). However, students who were traditionally bullied thought the consequences of traditional bullying were harsher than did those students who were cyberbullied (Campbell, et al., 2012). In Slonje and Smith’s study (2008), students reported that text messaging and email bullying had less of an impact than traditional bullying, but that bullying by pictures or video clips had more negative impact than traditional bullying.

Relevance: 10.00%

Abstract:

Objective: The aim of this paper is to propose a ‘Perceived barriers and lifestyle risk factor modification model’ that could be incorporated into existing frameworks for diabetes education to enhance lifestyle risk factor education in women. Setting: Diabetes education, community health. Primary argument: ‘Perceived barriers’ is a health promotion concept that has been found to be a significant predictor of health promotion behaviour. There is evidence that women face a range of perceived barriers that prevent them from engaging in healthy lifestyle activities. Despite this, current evidence based models of diabetes education do not explicitly incorporate the concept of perceived barriers. A model of risk factor reduction that incorporates ‘perceived barriers’ is proposed. Conclusion: Although further research is required, current approaches to risk factor reduction in type 2 diabetes could be enhanced by identification and goal setting to reduce an individual’s perceived barriers.

Relevance: 10.00%

Abstract:

There is general agreement in the scientific community that entrepreneurship plays a central role in the growth and development of an economy in rapidly changing environments (Acs & Virgill 2010). In particular, when business activities are regarded as a vehicle for sustainable growth at large, going beyond the mere economic returns of individual entities to also encompass social problems and rely heavily on collaborative actions, we fall more precisely into the domain of ‘social entrepreneurship’ (Robinson et al. 2009). In the entrepreneurship literature, prior studies demonstrated the role of intentionality as the best predictor of planned behavior (Ajzen 1991), and assumed that the intention to start a business derives from the perception of desirability and feasibility and from a propensity to act upon an opportunity (Fishbein & Ajzen 1975). Recognizing that starting a business is an intentional act (Krueger et al. 2000) and that entrepreneurship is a planned behaviour (Katz & Gartner 1988), models of entrepreneurial intentions have substantial implications for intentionality research in entrepreneurship. The purpose of this paper is to explore the emerging practice of social entrepreneurship by comparing the determinants of entrepreneurial intention in general versus those leading to start-ups with a social mission. Social entrepreneurial intentions clearly merit investigation, given that the opportunity identification process is an intentional process that is not unique to for-profit start-ups, and yet there is a lack of research examining opportunity recognition in social entrepreneurship (Haugh 2005). The key argument is that intentionality in both traditional and social entrepreneurs during the decision-making process of new venture creation is influenced by an individual's perceptions of opportunities (Fishbein & Ajzen 1975). Besides opportunity recognition, at least two other aspects can substantially influence intentionality: human and social capital (Davidsson, 2003). This paper sets out to establish whether, and to what extent, the social intentions of potential entrepreneurs are influenced, at the cognitive level, by opportunity recognition, human capital and social capital. By applying established theoretical constructs, the paper draws comparisons between ‘for-profit’ and ‘social’ intentionality using two samples of students enrolled in Economics and Business Administration at the University G. d'Annunzio in Pescara, Italy. A questionnaire was submitted to 310 potential entrepreneurs to test the robustness of the model. The collected data were used to measure the theoretical constructs of the paper. Reliability of the multi-item scale for each dimension was measured using Cronbach's alpha, and for all dimensions the reliability measures were above 0.70. We empirically tested the model using structural equation modeling with AMOS. The results allow us to contribute empirically to the argument regarding the influence of human and social cognitive capital on social and non-social entrepreneurial intentions. Moreover, we highlight the importance of further research looking deeper into the determinants of traditional and social entrepreneurial intention so that governments can one day define better policies and regulations that promote sustainable businesses with a social imprint, rather than inhibit their formation and growth.
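A minimal sketch of the scale-reliability check mentioned above, computing Cronbach's alpha for a multi-item dimension; the item layout and simulated responses are assumptions for illustration, not the study data.

```python
# Illustrative sketch (not the paper's code): Cronbach's alpha for a multi-item scale,
# the reliability statistic reported as above 0.70 for each dimension.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 310 respondents answering a hypothetical 4-item intention scale (1-7 Likert)
rng = np.random.default_rng(1)
base = rng.integers(1, 8, size=(310, 1))
scale = np.clip(base + rng.integers(-1, 2, size=(310, 4)), 1, 7)
print(f"Cronbach's alpha = {cronbach_alpha(scale):.2f}")
```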

Relevance: 10.00%

Abstract:

Aggressive driving is considered an important road-safety concern for drivers in highly motorised countries. However, understanding of the causes and maintenance factors fundamental to aggressive driving is limited. In keeping with theoretical advances from general aggression research, such as the General Aggression Model (GAM), research has begun to examine the emotional and cognitive antecedents of aggressive driving in order to better understand the underlying processes motivating it. Early findings in the driving area have suggested that greater levels of aggression are elicited in response to an intentionally aggressive on-road event. In contrast, general aggression research suggests that greater levels of aggression are elicited in response to an ambiguous event. The current study examined emotional and cognitive responses to two hypothetical driving scenarios with differing levels of aggressive intent (intentional versus ambiguous). There was also an interest in whether the factors influencing responses were different for hostile aggression (that is, where the action is intended to harm the other) versus instrumental aggression (that is, where the action is motivated by an intention to remove an impediment or attain a goal). Significantly stronger negative emotions and negative attributions, as well as greater levels of threat, were reported in response to the scenario that was designed to appear intentional in nature. In addition, participants were more likely to endorse an aggressive behavioural response to a situation that appeared deliberately aggressive than to one where the intention was ambiguous. Analyses to determine whether greater levels of negative emotions and cognitions are able to predict aggressive responses provided different patterns of results for instrumental aggression from those for hostile aggression. Specifically, for instrumental aggression, negative emotions and negative attributions were significant predictors for both the intentional and the ambiguous scenarios. In addition, perceived threat was also a significant predictor where the other driver’s intent was clearly aggressive. However, lower, rather than higher, levels of perceived threat were associated with greater endorsement of an aggressive response. For hostile aggressive behavioural responses, trait aggression was the strongest predictor for both situations. Overall, the results suggest that in the driving context, instrumental aggression is likely to be a much more common response than hostile aggression. Moreover, aggressive responses are more likely in situations where another driver’s behaviour is clearly intentional rather than ambiguous. The results also support the conclusion that there may be different underlying mechanisms motivating an instrumental aggressive response from those motivating a hostile one. In addition, understanding the emotions and cognitions underlying aggressive driving responses may be helpful in predicting and intervening to reduce driving aggression. The finding that drivers appear to regard tailgating as an instrumental response is of concern, since this behaviour has the potential to result in crashes.

Relevance: 10.00%

Abstract:

Purpose – This paper summarises a successfully defended doctoral thesis. Its main purpose is to provide a summary of the scope and main issues raised in the thesis so that readers undertaking studies in the same or connected areas may be aware of current contributions to the topic. The secondary aims are to frame the completed thesis in the context of doctoral-level research in project management, as well as to offer ideas for further investigation that would serve to extend scientific knowledge on the topic. Design/methodology/approach – Research reported in this paper is based on a quantitative study using inferential statistics aimed at better understanding the actual and potential usage of earned value management (EVM) as applied to external projects under contract. Theories uncovered during the literature review were hypothesized and tested using experiential data collected from 145 EVM practitioners with direct experience on one or more external projects under contract that applied the methodology. Findings – The results of this research suggest that EVM is an effective project management methodology. The principles of EVM were shown to be significant positive predictors of project success on contracted efforts, and to be a relatively greater positive predictor of project success when using fixed-price versus cost-plus (CP) type contracts. Moreover, EVM's work-breakdown structure (WBS) utility was shown to positively contribute to the formation of project contracts. The contribution was not significantly different between fixed-price and CP contracted projects, with exceptions in the areas of schedule planning and payment planning. EVM's S-curve benefited the administration of project contracts. The contribution of the S-curve was not significantly different between fixed-price and CP contracted projects. Furthermore, EVM metrics were shown to be important contributors to the administration of project contracts. The relative contribution of EVM metrics to projects under fixed-price versus CP contracts was not significantly different, with one exception in the area of evaluating and processing payment requests. Practical implications – These results have important implications for project practitioners and EVM advocates, as well as corporate and governmental policy makers. EVM should be considered for all projects, regardless of contract type, not only for its positive contribution to project contract development and administration but also for its contribution to project success. Contract type should not be the sole determining factor in the decision whether or not to use EVM. More particularly, the more fixed the contracted project cost, the more the principles of EVM explain the success of the project. EVM mechanics should also be used in all projects regardless of contract type. Payment planning using a WBS should be emphasized in fixed-price contracts using EVM in order to help mitigate performance risk. Schedule planning using a WBS should be emphasized in CP contracts using EVM in order to help mitigate financial risk. Similarly, EVM metrics should be emphasized in fixed-price contracts in evaluating and processing payment requests. Originality/value – This paper provides a summary of cutting-edge research work and a link to the published thesis, which researchers can use to help them understand how the research methodology was applied as well as how it can be extended.
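For readers unfamiliar with the EVM mechanics referred to above (WBS-based planning, the S-curve and EVM metrics), the following is a short worked illustration of the standard metrics; the figures are invented for the example and are not drawn from the thesis data.

```python
# Worked illustration of the core earned value management (EVM) metrics (standard formulas).
budget_at_completion = 100_000.0   # BAC: total budgeted cost of the contracted work
planned_value = 40_000.0           # PV: budgeted cost of work scheduled to date
earned_value = 35_000.0            # EV: budgeted cost of work actually performed
actual_cost = 38_000.0             # AC: actual cost of work performed

schedule_variance = earned_value - planned_value      # SV  = EV - PV
cost_variance = earned_value - actual_cost            # CV  = EV - AC
spi = earned_value / planned_value                    # SPI = EV / PV
cpi = earned_value / actual_cost                      # CPI = EV / AC
estimate_at_completion = budget_at_completion / cpi   # EAC, assuming current cost efficiency persists

print(f"SV = {schedule_variance:,.0f}  CV = {cost_variance:,.0f}")
print(f"SPI = {spi:.2f}  CPI = {cpi:.2f}  EAC = {estimate_at_completion:,.0f}")
```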

Relevance: 10.00%

Abstract:

Background: Malaria is a significant threat to population health in the border areas of Yunnan Province, China. How to accurately measure malaria transmission is an important issue. This study aimed to examine the role of slide positivity rates (SPR) in malaria transmission in Mengla County, Yunnan Province, China. Methods: Data on annual malaria cases, SPR and socio-economic factors for the period 1993 to 2008 were obtained from the Center for Disease Control and Prevention (CDC) and the Bureau of Statistics, Mengla, China. Multiple linear regression models were used to evaluate the relationship between socio-ecologic factors and malaria incidence. Results: SPR was significantly positively associated with malaria incidence rates. SPR alone (beta = 1.244, p < 0.001) and SPR in combination with other predictors (beta = 1.326, p < 0.001) explained about 85% and 95% of the variation in malaria transmission, respectively. Every 1% increase in SPR corresponded to an increase of 1.76/100,000 in the malaria incidence rate. Conclusion: SPR is a strong predictor of malaria transmission, and can be used to improve the planning and implementation of malaria elimination programmes in Mengla and other similar locations. SPR might also be a useful indicator for malaria early warning systems in China.
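A minimal sketch of the regression structure described above (SPR alone versus SPR combined with other predictors); the file name, covariates and column names are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (not the study's code): multiple linear regression of annual
# malaria incidence on SPR and socio-economic covariates. The SPR coefficient carries
# the interpretation quoted above: e.g. a coefficient of 1.76 would mean each 1% rise
# in SPR is associated with an extra 1.76 cases per 100,000 population.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mengla_1993_2008.csv")  # hypothetical annual data, 1993-2008

spr_only = smf.ols("incidence_per_100k ~ spr", data=df).fit()
full = smf.ols("incidence_per_100k ~ spr + gdp_per_capita + rice_area", data=df).fit()

print(f"SPR-only model R^2 = {spr_only.rsquared:.2f}")   # ~0.85 reported in the abstract
print(f"Full model R^2     = {full.rsquared:.2f}")       # ~0.95 reported in the abstract
print(full.params)
```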