Abstract:
Introduction The suitability of video conferencing (VC) technology for clinical purposes relevant to geriatric medicine is still being established. This project aimed to determine the validity of the diagnosis of dementia via VC. Methods This was a multisite, noninferiority, prospective cohort study. Patients, aged 50 years and older, referred by their primary care physician for cognitive assessment, were assessed at 4 memory disorder clinics. All patients were assessed independently by 2 specialist physicians. They were allocated one face-to-face (FTF) assessment (the reference standard, reflecting usual clinical practice) and an additional assessment (either a second FTF assessment or a VC assessment) on the same day. Each specialist physician had access to the patient chart and the results of a battery of standardized cognitive assessments administered FTF by the clinic nurse. Percentage agreement (P0) and the weighted kappa statistic with linear weights (Kw) were used to assess inter-rater reliability across the 2 study groups on the diagnosis of dementia (cognition normal, impaired, or demented). Results The 205 patients were allocated to two groups: Videoconference (n = 100) or Standard Practice (n = 105); 106 were men. The average age was 76 years (SD 9, range 51–95) and the average Standardized Mini-Mental State Examination score was 23.9 (SD 4.7, range 9–30). Agreement for the Videoconference group (P0 = 0.71; Kw = 0.52; P < .0001) and agreement for the Standard Practice group (P0 = 0.70; Kw = 0.50; P < .0001) were both statistically significant (P < .05). The summary kappa statistic of 0.51 (P = .84) indicated that VC was not inferior to FTF assessment. Conclusions Previous studies have shown that preliminary standardized assessment tools can be reliably administered and scored via VC. 
This study focused on the geriatric assessment component of the interview (interpretation of standardized assessments, taking a history, and formulating a diagnosis by a medical specialist) and identified high levels of agreement for diagnosing dementia. A model of service incorporating either locally or remotely administered standardized assessments, together with remote specialist assessment, is a reliable process for enabling the diagnosis of dementia for isolated older adults.
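The agreement statistics reported above (percentage agreement P0 and the linearly weighted kappa Kw) can be sketched in a few lines; the three ordinal categories and the rating data below are illustrative, not the study's actual data.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """P0: proportion of cases on which the two raters give the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def weighted_kappa(r1, r2, k):
    """Kappa with linear weights for two raters over k ordinal categories 0..k-1.

    Disagreement weight w_ij = |i - j| / (k - 1);
    Kw = 1 - sum(w * observed) / sum(w * expected-by-chance).
    """
    n = len(r1)
    obs = Counter(zip(r1, r2))              # observed (rater1, rater2) pair counts
    p1, p2 = Counter(r1), Counter(r2)       # marginal category counts per rater
    num = sum(abs(i - j) / (k - 1) * obs[(i, j)] / n
              for i in range(k) for j in range(k))
    den = sum(abs(i - j) / (k - 1) * (p1[i] / n) * (p2[j] / n)
              for i in range(k) for j in range(k))
    return 1.0 - num / den

# Toy ratings: 0 = cognition normal, 1 = impaired, 2 = demented
ftf = [0, 1, 2, 2, 1, 0, 2, 1]   # face-to-face specialist
vc  = [0, 1, 2, 1, 1, 0, 2, 2]   # videoconference specialist
p0 = percent_agreement(ftf, vc)  # 0.75
kw = weighted_kappa(ftf, vc, 3)  # ≈ 0.704
```

Linear weighting penalises a normal-vs-demented disagreement twice as heavily as a normal-vs-impaired one, which suits the ordinal outcome used in this study.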
Abstract:
In this study, we explore motivation in collocated and virtual project teams. The literature on motivation in a project setting reveals that motivation is closely linked to team performance. Based on this literature, we propose a set of variables related to the three dimensions of ‘Nature of work’, ‘Rewards’, and ‘Communication’. Thirteen original variables in a sample of 66 collocated and 66 virtual respondents are investigated using a one-tailed t-test and principal component analysis. We find that there are minimal differences between the two groups with respect to the above-mentioned three dimensions (t = 1.71; p = .06). Further, a principal component analysis of the combined sample of collocated and virtual project environments reveals two factors: an ‘Internal Motivating Factor’ related to work and the work environment, and an ‘External Motivating Factor’ related to financial and non-financial rewards. Together they explain 59.8% of the variance and comprehensively characterize motivation in collocated and virtual project environments. A ‘sense check’ of our interpretation of the results shows conformity with the theory and existing practice of project organization.
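The group comparison above rests on an independent-samples t statistic (reported as t = 1.71, p = .06, one-tailed). A minimal pooled-variance sketch, with illustrative motivation scores rather than the study's data:

```python
from math import sqrt

def pooled_t(a, b):
    """Independent-samples t statistic, equal-variance (pooled) form."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

collocated = [4.1, 3.8, 4.5, 4.0, 3.9]   # illustrative mean motivation ratings
virtual = [3.7, 3.9, 3.6, 4.1, 3.5]
t = pooled_t(collocated, virtual)
```

For a one-tailed test, the statistic is then compared against the critical value of Student's t with na + nb - 2 degrees of freedom in the hypothesised direction only.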
Abstract:
Scientific efforts to understand and reduce the occurrence of road crashes continue to expand, particularly in the area of vulnerable road user groups. Three groups that are receiving increasing attention within the literature are younger drivers, motorcyclists and older drivers. These three groups are at an elevated risk of being involved in a crash or seriously injured, and research continues to focus on the origins of this risk as well as the development of appropriate countermeasures to improve driving outcomes for these cohorts. However, it currently remains unclear which factors make the largest contribution to crash risk or which countermeasures are likely to produce the greatest long-term positive effects on road safety. This paper reviews research that has focused on the personal and environmental factors that increase crash risk for these groups and considers directions for future research in the respective areas. A major theme to emerge from this review is that while there is a plethora of individual and situational factors that influence the likelihood of crashes, these factors often combine in an additive manner to exacerbate the risk of both injury and fatality. Additionally, there are a number of risk factors that are pertinent to all three road user groups, particularly age and the level of driving experience. As a result, targeted interventions that address these factors are likely to maximise the flow-on benefits to a wider range of road users. Finally, there is a need for further research that aims to bridge the research-to-practice gap, in order to develop appropriate pathways to ensure that evidence-based research is directly transferred into effective policies that improve safety outcomes.
Abstract:
Background & aims: The confounding effect of disease on the outcomes of malnutrition using diagnosis-related groups (DRG) has never been studied in a multidisciplinary setting. This study aims to determine the prevalence of malnutrition in a tertiary hospital in Singapore and its impact on hospitalization outcomes and costs, controlling for DRG. Methods: This prospective cohort study included a matched case-control study. Subjective Global Assessment was used to assess the nutritional status on admission of 818 adults. Hospitalization outcomes over 3 years were adjusted for gender, age, ethnicity, and matched for DRG. Results: Malnourished patients (29%) had longer hospital stays (6.9 ± 7.3 days vs. 4.6 ± 5.6 days, p < 0.001) and were more likely to be readmitted within 15 days (adjusted relative risk = 1.9, 95% CI 1.1–3.2, p = 0.025). Within a DRG, the mean difference between the actual cost of hospitalization and the average cost was greater for malnourished patients than for well-nourished patients (p = 0.014). Mortality was higher in malnourished patients at 1 year (34% vs. 4.1%), 2 years (42.6% vs. 6.7%) and 3 years (48.5% vs. 9.9%); p < 0.001 for all. Overall, malnutrition was a significant predictor of mortality (adjusted hazard ratio = 4.4, 95% CI 3.3–6.0, p < 0.001). Conclusions: Malnutrition was evident in up to one third of the inpatients and led to poor hospitalization outcomes and survival as well as increased costs of care, even after matching for DRG. Strategies to prevent and treat malnutrition in the hospital and post-discharge are needed.
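The readmission result above is an adjusted relative risk (1.9, 95% CI 1.1–3.2). As an unadjusted sketch of the same quantity, the following computes a relative risk with a Wald confidence interval on the log scale; the 2×2 counts and labels are illustrative, not the study's data.

```python
from math import exp, log, sqrt

def relative_risk(a, b, c, d, z=1.96):
    """Unadjusted relative risk with a 95% Wald CI for a 2x2 table.

    a/b = readmitted/not readmitted among malnourished patients,
    c/d = readmitted/not readmitted among well-nourished (illustrative labels).
    """
    rr = (a / (a + b)) / (c / (c + d))
    se = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))  # SE of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

rr, lo, hi = relative_risk(10, 40, 5, 45)  # rr = 2.0 for these toy counts
```

The published estimate is adjusted for gender, age, ethnicity and DRG, which requires a regression model rather than this single-table calculation.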
Abstract:
Advanced prostate cancer is a common and generally incurable disease. Androgen deprivation therapy (ADT) is used to treat advanced prostate cancer with good benefits to quality of life and regression of disease. However, prostate cancer invariably progresses despite ongoing treatment to a castrate-resistant state. Androgen deprivation is associated with a form of metabolic syndrome, which includes insulin resistance and hyperinsulinaemia. The mitogenic and anti-apoptotic properties of insulin, acting through the insulin and hybrid insulin/IGF-1 receptors, appear to promote prostate tumour growth. This pilot study was designed to assess any correlation between elevated insulin levels and progression to castrate-resistant prostate cancer. Methods: 36 men receiving ADT for advanced prostate cancer were recruited, at various stages of their treatment, along with 47 controls: men with localised prostate cancer, pre-treatment. Serum measurements of C-peptide (used as a surrogate marker for insulin production) were performed and compared between groups. The correlation between serum C-peptide level and time to progression to castrate-resistant disease was assessed. Results: There was a significant elevation of C-peptide levels in the ADT group (mean = 1639 pmol/L) compared to the control group (mean = 1169 pmol/L), with a p-value of 0.025. In 17 men with a good initial response to androgen deprivation, a small negative trend towards earlier progression to castrate resistance with increasing C-peptide level was seen in the ADT group (r = -0.050); however, this did not reach statistical significance (p > 0.1). Conclusions: This pilot study confirms an increase in serum C-peptide levels in men receiving ADT for advanced prostate cancer. A non-significant but negative trend towards earlier progression to castrate resistance with increasing C-peptide suggests the need for a formal prospective study assessing this hypothesis.
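The trend reported above (r = -0.050) is a product-moment correlation between C-peptide level and time to progression. A minimal Pearson correlation sketch; the paired data below are illustrative, not the study's measurements:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative pairs: (C-peptide in pmol/L, months to castrate resistance)
c_peptide = [1200, 1500, 1700, 1900, 2200]
months = [30, 28, 24, 26, 20]
r = pearson_r(c_peptide, months)   # negative: higher C-peptide, earlier progression
```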
Abstract:
The mosaic novel - with its independent 'story-tiles' linking together to form a complete narrative - has the potential to act as a reflection on the periodic resurfacing of unconscious memories in the conscious lives of fictional characters. This project is an exploration of the mosaic text as a fictional analogue of involuntary memory. These concepts are investigated as they appear in traditional fairy tales and engaged with in this thesis's creative component, Sourdough and Other Stories (approximately 80,000 words), a mosaic novel comprising sixteen interconnected 'story-tiles'. Traditional fairy tales are non-reflective and conducive to forgetting (i.e. anti-memory); fairy tale characters are frequently portrayed as psychologically two-dimensional, in that there is no examination of the mental and emotional distress caused when children are stolen/abandoned/lost and when adults are exiled. Sourdough and Other Stories is a creative examination of, and an attempt to remedy, this lack of psychological depth. This creative work is at once something more than a short story collection and something that is not a traditional novel, but instead a culmination of two modes of writing. It employs the fairy tale form to explore James' 'thorns in the spirit' (1898, p.199) in fiction; the anxiety caused by separation from familial and community groups. The exegesis, A Story Told in Parts - Sourdough and Other Stories, is a critical essay (approximately 20,000 words in length), a companion piece to the mosaic novel, which analyses how my research question proceeded from my creative work, and considers the theoretical underpinnings of the creative work and how it enacts the research question: 'Can a writer use the structural possibilities of the mosaic text to create a fictional work that is an analogue of an involuntary memory?' 
The cumulative effect of the creative and exegetical works should be that of a dialogue between the two components - each text informing the other and providing alternate but complementary lenses with which to view the research question.
Abstract:
The human Ureaplasma species are the most frequently isolated bacteria from the upper genital tract of pregnant women and can cause clinically asymptomatic, intra-uterine infections, which are difficult to treat with antimicrobials. Ureaplasma infection of the upper genital tract during pregnancy has been associated with numerous adverse outcomes including preterm birth, chorioamnionitis and neonatal respiratory diseases. The mechanisms by which ureaplasmas are able to chronically colonise the amniotic fluid and avoid eradication by (i) the host immune response and (ii) maternally-administered antimicrobials, remain virtually unexplored. To address this gap within the literature, this study investigated potential mechanisms by which ureaplasmas are able to cause chronic, intra-amniotic infections in an established ovine model. In this PhD program of research the effectiveness of standard, maternal erythromycin for the treatment of chronic, intra-amniotic ureaplasma infections was evaluated. At 55 days of gestation pregnant ewes received an intra-amniotic injection of either: a clinical Ureaplasma parvum serovar 3 isolate that was sensitive to macrolide antibiotics (n = 16); or 10B medium (n = 16). At 100 days of gestation, ewes were then randomised to receive either maternal erythromycin treatment (30 mg/kg/day for four days) or no treatment. Ureaplasmas were isolated from amniotic fluid, chorioamnion, umbilical cord and fetal lung specimens, which were collected at the time of preterm delivery of the fetus (125 days of gestation). Surprisingly, the numbers of ureaplasmas colonising the amniotic fluid and fetal tissues were not different between experimentally-infected animals that received erythromycin treatment or infected animals that did not receive treatment (p > 0.05), nor were there any differences in fetal inflammation and histological chorioamnionitis between these groups (p > 0.05). 
These data demonstrate the inability of maternal erythromycin to eradicate intra-uterine ureaplasma infections. Erythromycin was detected in the amniotic fluid of animals that received antimicrobial treatment (but not in those that did not receive treatment) by liquid chromatography-mass spectrometry; however, the concentrations were below therapeutic levels (<10 – 76 ng/mL). These findings indicate that the ineffectiveness of standard, maternal erythromycin treatment of intra-amniotic ureaplasma infections may be due to the poor placental transfer of this drug. Subsequently, the phenotypic and genotypic characteristics of ureaplasmas isolated from the amniotic fluid and chorioamnion of pregnant sheep after chronic, intra-amniotic infection and low-level exposure to erythromycin were investigated. At 55 days of gestation twelve pregnant ewes received an intra-amniotic injection of a clinical U. parvum serovar 3 isolate, which was sensitive to macrolide antibiotics. At 100 days of gestation, ewes received standard maternal erythromycin treatment (30 mg/kg/day for four days, n = 6) or saline (n = 6). Preterm fetuses were surgically delivered at 125 days of gestation and ureaplasmas were cultured from the amniotic fluid and the chorioamnion. The minimum inhibitory concentrations (MICs) of erythromycin, azithromycin and roxithromycin were determined for cultured ureaplasma isolates, and antimicrobial susceptibilities were different between ureaplasmas isolated from the amniotic fluid (MIC range = 0.08 – 1.0 mg/L) and chorioamnion (MIC range = 0.06 – 5.33 mg/L). However, the increased resistance to macrolide antibiotics observed in chorioamnion ureaplasma isolates occurred independently of exposure to erythromycin in vivo. 
Remarkably, domain V of the 23S ribosomal RNA gene (which is the target site of macrolide antimicrobials) of chorioamnion ureaplasmas demonstrated significant variability (125 polymorphisms out of 422 sequenced nucleotides, 29.6%) when compared to the amniotic fluid ureaplasma isolates and the inoculum strain. This sequence variability did not occur as a consequence of exposure to erythromycin, as the nucleotide substitutions were identical between chorioamnion ureaplasmas isolated from different animals, including those that did not receive erythromycin treatment. We propose that these mosaic-like 23S ribosomal RNA gene sequences may represent gene fragments transferred via horizontal gene transfer. The significant differences observed in (i) susceptibility to macrolide antimicrobials and (ii) 23S ribosomal RNA sequences of ureaplasmas isolated from the amniotic fluid and chorioamnion suggests that the anatomical site from which they were isolated may exert selective pressures that alter the socio-microbiological structure of the bacterial population, by selecting for genetic changes and altered antimicrobial susceptibility profiles. The final experiment for this PhD examined antigenic size variation of the multiple banded antigen (MBA, a surface-exposed lipoprotein and predicted ureaplasmal virulence factor) in chronic, intra-amniotic ureaplasma infections. Previously defined ‘virulent-derived’ and ‘avirulent-derived’ clonal U. parvum serovar 6 isolates (each expressing a single MBA protein) were injected into the amniotic fluid of pregnant ewes (n = 20) at 55 days of gestation, and amniotic fluid was collected by amniocentesis every two weeks until the time of near-term delivery of the fetus (at 140 days of gestation). Both the avirulent and virulent clonal ureaplasma strains generated MBA size variants (ranging in size from 32 – 170 kDa) within the amniotic fluid of pregnant ewes. 
The mean number of MBA size variants produced within the amniotic fluid was not different between the virulent (mean = 4.2 MBA variants) and avirulent (mean = 4.6 MBA variants) ureaplasma strains (p = 0.87). Intra-amniotic infection with the virulent strain was significantly associated with the presence of meconium-stained amniotic fluid (p = 0.01), which is an indicator of fetal distress in utero. However, the severity of histological chorioamnionitis was not different between the avirulent and virulent groups. We demonstrated that ureaplasmas were able to persist within the amniotic fluid of pregnant sheep for 85 days, despite the host mounting an innate and adaptive immune response. Pro-inflammatory cytokines (interleukin (IL)-1β, IL-6 and IL-8) were elevated within the chorioamnion tissue of pregnant sheep from both the avirulent and virulent treatment groups, and this was significantly associated with the production of anti-ureaplasma IgG antibodies within maternal sera (p < 0.05). These findings suggested that the inability of the host immune response to eradicate ureaplasmas from the amniotic cavity may be due to continual size variation of MBA surface-exposed epitopes. Taken together, these data confirm that ureaplasmas are able to cause long-term in utero infections in a sheep model, despite standard antimicrobial treatment and the development of a host immune response. 
The overall findings of this PhD project suggest that ureaplasmas are able to cause chronic, intra-amniotic infections due to (i) the limited placental transfer of erythromycin, which prevents the accumulation of therapeutic concentrations within the amniotic fluid; (ii) the ability of ureaplasmas to undergo rapid selection and genetic variation in vivo, resulting in ureaplasma isolates with variable MICs to macrolide antimicrobials colonising the amniotic fluid and chorioamnion; and (iii) antigenic size variation of the MBA, which may prevent eradication of ureaplasmas by the host immune response and account for differences in neonatal outcomes. The outcomes of this program of study have improved our understanding of the biology and pathogenesis of this highly adapted microorganism.
Abstract:
Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations to current approaches in the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM) consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM. Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis independent of the larger randomised controlled trial, complete data was available for 95 adults (59 male; mean age(±SD)=56.8±8.1 years; mean(±SD)BMI=34.2±7.0kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% energy intake, body weight of 0.7 kg and waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. 
The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in the interpretation of results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reporting dietary intake in this instance. This study provided an example of the methodological challenges experienced with measuring changes in absolute diet using a FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes. Study 2: The development and trial of the NuDAM recording protocol The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age=64.7±3.8 years; BMI=33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam; however, forgotten meals contributed to the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications to the method were made to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4. 
Study 3: The development and trial of the NuDAM analysis protocol Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age=24.7±9.1 years; BMI=21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than the type of aid, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age=23.3±5.1 years; BMI=20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57kJ/day vs. 
274kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake. Study 4: Evaluation of the NuDAM The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake was evaluated in a sample of 10 adults (6 males; age=61.2±6.9 years; BMI=31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with EI:TEE ratios of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR method, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found across energy and macronutrients, except fat (r=0.24), between the two dietary measures. High agreement was observed between dietitians for estimates of energy and macronutrients derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods. This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. 
The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
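The EI:TEE comparison in Study 4 (ratios of about 0.76 against doubly labelled water) is the core plausibility check for self-reported intake. That check can be sketched as follows; the subject records and the cutoff value are illustrative assumptions, not the thesis's actual criterion or data.

```python
def ei_tee_ratio(ei_mj, tee_mj):
    """Reported energy intake divided by measured total energy expenditure."""
    return ei_mj / tee_mj

def flag_implausible(records, cutoff=0.76):
    """Return ids of subjects whose EI:TEE falls below the cutoff.

    The cutoff here is illustrative; formal plausibility cutoffs depend on
    measurement error and the length of the recording period."""
    return [sid for sid, ei, tee in records if ei_tee_ratio(ei, tee) < cutoff]

# (id, reported EI in MJ/day, DLW-measured TEE in MJ/day) -- illustrative
subjects = [("s01", 7.5, 10.5), ("s02", 9.5, 10.0), ("s03", 6.2, 11.0)]
under_reporters = flag_implausible(subjects)   # ["s01", "s03"]
```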
Abstract:
Newly licensed drivers on a provisional or intermediate licence have the highest crash risk when compared with any other group of drivers. In comparison, learner drivers have the lowest crash risk. Graduated driver licensing is one countermeasure that has been demonstrated to effectively reduce the crashes of novice drivers. This thesis examined the graduated driver licensing systems in two Australian states in order to better understand the behaviour of learner drivers, provisional drivers and the supervisors of learner drivers. In doing this, the thesis investigated the personal, social and environmental influences on novice driver behaviour as well as providing effective baseline data against which to measure subsequent changes to the licensing systems. In the first study, conducted prior to the changes to the graduated driver licensing system introduced in mid-2007, drivers who had recently obtained their provisional licence in Queensland and New South Wales were interviewed by telephone regarding their experiences while driving on their learner licence. Of the 687 eligible people approached to participate at driver licensing centres, 392 completed the study, representing a response rate of 57.1 per cent. At the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. The results suggested that requiring learners to complete a mandated number of hours of supervised practice affects the number of hours that learners report completing. While most learners from New South Wales reported meeting the requirement to complete 50 hours of practice, it appears that many stopped practising soon after this goal was achieved. In contrast, learners from Queensland, who were not required to complete a specific number of hours at the time of the survey, tended to fall into three groups. 
The first group appeared to complete the minimum number of hours required to pass the test (less than 26 hours), the second group completed 26 to 50 hours of supervised practice while the third group completed significantly more practice than the first two groups (over 100 hours of supervised practice). Learner drivers in both states reported generally complying with the road laws and were unlikely to report that they had been caught breaking the road rules. They also indicated that they planned to obey the road laws once they obtained their provisional licence. However, they were less likely to intend to comply with recommended actions to reduce crash risk such as limiting their driving at night. This study also identified that there were relatively low levels of unaccompanied driving (approximately 15 per cent of the sample), very few driving offences committed (five per cent of the sample) and that learner drivers tended to use a mix of private and professional supervisors (although the majority of practice is undertaken with private supervisors). Consistent with the international literature, this study identified that very few learner drivers had experienced a crash (six per cent) while on their learner licence. The second study was also conducted prior to changes to the graduated driver licensing system and involved follow up interviews with the participants of the first study after they had approximately 21 months driving experience on their provisional licence. Of the 392 participants that completed the first study, 233 participants completed the second interview (representing a response rate of 59.4 per cent). As with the first study, at the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. For instance, novice drivers from New South Wales were required to progress through two provisional licence phases (P1 and P2) while there was only one provisional licence phase in Queensland. 
Among the participants in this second study, almost all provisional drivers (97.9 per cent) owned or had access to a vehicle for regular driving. They reported that they were unlikely to break road rules, such as driving after a couple of drinks, but were also unlikely to comply with recommended actions, such as limiting their driving at night. When their provisional driving behaviour was compared to the stated intentions from the first study, the results suggested that their intentions were not a strong predictor of their subsequent behaviour. Their perception of risk associated with driving declined from when they first obtained their learner licence to when they had acquired provisional driving experience. Just over 25 per cent of participants in study two reported that they had been caught committing driving offences while on their provisional licence. Nearly one-third of participants had crashed while driving on a provisional licence, although few of these crashes resulted in injuries or hospitalisations. To complement the first two studies, the third study examined the experiences of supervisors of learner drivers, as well as their perceptions of their learner’s experiences. This study was undertaken after the introduction of the new graduated driver licensing systems in Queensland and New South Wales in mid- 2007, providing insights into the impacts of these changes from the perspective of supervisors. The third study involved an internet survey of 552 supervisors of learner drivers. Within the sample, approximately 50 per cent of participants supervised their own child. Other supervisors of the learner drivers included other parents or stepparents, professional driving instructors and siblings. For two-thirds of the sample, this was the first learner driver that they had supervised. Participants had provided an average of 54.82 hours (sd = 67.19) of supervision. 
Seventy-three per cent of participants indicated that their learners' logbooks were accurate or very accurate in most cases, although parents were more likely than non-parents to report that their learner's logbook was accurate (F(1,546) = 7.74, p = .006). There was no difference between parents and non-parents regarding whether they believed the logbook system was effective (F(1,546) = .01, p = .913). The majority of the sample reported that their learner driver had had some professional driving lessons. Notwithstanding this, a significant proportion (72.5 per cent) believed that parents should be either very involved or involved in teaching their child to drive, with parents being more likely than non-parents to hold this belief. In the post mid-2007 graduated driver licensing system, Queensland learner drivers are able to record three hours of supervised practice in their logbook for every hour that is completed with a professional driving instructor, up to a total of ten hours. Despite this, there was no difference identified between Queensland and New South Wales participants regarding the amount of time that they reported their learners spent with professional driving instructors (χ²(1) = 2.56, p = .110). Supervisors from New South Wales were more likely to ensure that their learner driver complied with the road laws. Additionally, with the exception of drug driving laws, New South Wales supervisors believed it was more important to teach safety-related behaviours, such as remaining within the speed limit, car control and hazard perception, than those from Queensland. This may be indicative of more intensive road safety educational efforts in New South Wales or the longer time that graduated driver licensing has operated in that jurisdiction. However, other factors may have contributed to these findings and further research is required to explore the issue.
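The state comparison above rests on a chi-square test of independence on a 2x2 contingency table. A minimal sketch of that test, using invented counts rather than the survey data (the thesis reports χ²(1) = 2.56, p = .110 for its own sample):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: rows = QLD, NSW; columns = learner used a
# professional instructor, did not. Illustrative only, not the study data.
stat = chi2_2x2(180, 95, 195, 82)
# Compare against the df = 1 critical value at alpha = .05 (3.841).
print(f"chi-square = {stat:.2f}; significant at .05: {stat > 3.841}")
```

With these illustrative counts the statistic falls below the 3.841 critical value, mirroring the nonsignificant state difference reported above.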
In addition, supervisors reported that their learner driver was involved in very few crashes (3.4 per cent) and offences (2.7 per cent). This relatively low reported crash rate is similar to that identified in the first study. Most of the graduated driver licensing research to date has been applied in nature and has lacked a strong theoretical foundation. In contrast, the three studies in this program of research used Akers' social learning theory to explore the self-reported behaviour of novice drivers and their supervisors. This theory was selected as it has previously been found to provide a relatively comprehensive framework for explaining a range of driver behaviours, including novice driver behaviour. Sensation seeking was also used in the first two studies to complement the non-social rewards component of Akers' social learning theory. This program of research identified that both Akers' social learning theory and sensation seeking were useful in predicting the behaviour of learner and provisional drivers over and above socio-demographic factors. Within the first study, Akers' social learning theory accounted for an additional 22 per cent of the variance in learner driver compliance with the law, over and above a range of socio-demographic factors such as age, gender and income. The two constructs within Akers' theory which were significant predictors of learner driver compliance were the behavioural dimension of differential association relating to friends, and anticipated rewards. Sensation seeking predicted an additional six per cent of the variance in learner driver compliance with the law. When considering a learner driver's intention to comply with the law while driving on a provisional licence, Akers' social learning theory accounted for an additional 10 per cent of the variance above socio-demographic factors, with anticipated rewards being a significant predictor. Sensation seeking predicted an additional four per cent of the variance.
The results suggest that the more rewards individuals anticipate for complying with the law, the more likely they are to obey the road rules. Further research is needed to identify which specific rewards are most likely to encourage novice drivers' compliance with the law. In the second study, Akers' social learning theory predicted an additional 40 per cent of the variance in self-reported compliance with road rules over and above socio-demographic factors, while sensation seeking accounted for an additional five per cent of the variance. A number of Akers' social learning theory constructs significantly predicted provisional driver compliance with the law, including the behavioural dimension of differential association for friends, the normative dimension of differential association, personal attitudes and anticipated punishments. The consistent prediction of additional variance by sensation seeking over and above the variables within Akers' social learning theory in both studies one and two suggests that sensation seeking is not fully captured within the non-social rewards dimension of Akers' social learning theory, at least for novice drivers. It appears that novice drivers are strongly influenced by the desire to engage in new and intense experiences. While socio-demographic factors and the perception of risk associated with driving had an important role in predicting the behaviour of the supervisors of learner drivers, Akers' social learning theory provided further levels of prediction over and above these factors. The Akers' social learning theory variables predicted an additional 14 per cent of the variance in the extent to which supervisors ensured that their learners complied with the law and an additional eight per cent of the variance in the supervisors' provision of a range of practice experiences.
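The "additional variance" figures above come from hierarchical regression: a block of predictors is added to a baseline model and the change in R² (ΔR²) is attributed to that block. A minimal sketch with simulated data; the variable names and effect sizes are invented for illustration, not the thesis measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Block 1: hypothetical socio-demographics; block 2: hypothetical
# social-learning constructs (peer association, anticipated rewards).
age = rng.normal(17, 1, n)
gender = rng.integers(0, 2, n).astype(float)
peers = rng.normal(0, 1, n)
rewards = rng.normal(0, 1, n)
# Simulated outcome: compliance driven partly by the block-2 constructs.
y = 0.1 * age + 0.4 * peers + 0.5 * rewards + rng.normal(0, 1, n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with intercept on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_block1 = r_squared([age, gender], y)
r2_block2 = r_squared([age, gender, peers, rewards], y)
delta = r2_block2 - r2_block1
print(f"R^2 block 1 = {r2_block1:.3f}, block 2 = {r2_block2:.3f}, delta R^2 = {delta:.3f}")
```

Because the models are nested, R² can only increase when the second block is added; the question the thesis asks is whether the increase is large and significant.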
The normative dimension of differential association, personal attitudes towards the use of professional driving instructors and anticipated rewards were significant predictors of supervisors ensuring that their learner complied with the road laws, while the normative dimension was important for the range of practice provided. This suggests that supervisors who associate with other supervisors who ensure their learners comply with the road laws and who provide a range of practice are more likely to engage in these behaviours with their own learners. Within this program of research, there were several limitations, including the method of recruitment of participants in the first study, the lower participation rate in the second study, an inability to calculate a response rate for study three, and the use of self-report data in all three studies. Within the first study, participants were only recruited from larger driver licensing centres to ensure that there was a sufficient throughput of drivers to approach. This may have biased the results due to possible differences in learners who obtain their licences in locations with smaller licensing centres. Only 59.4 per cent of the sample in the first study completed the second study. This may be a limitation if there was a common reason why those not participating were unable to complete the interview, leading to a systematic impact on the results. The third study used a combination of convenience and snowball sampling, which meant that it was not possible to calculate a response rate. All three studies used self-report data which, in many cases, is considered a limitation. However, self-report data may be the only method that can be used to obtain some information. This program of research has a number of implications for countermeasures in both the learner licence phase and the provisional licence phase.
During the learner phase, licensing authorities need to carefully consider the number of hours that they mandate learner drivers must complete before they obtain their provisional driving licence. Setting this requirement too low may have inadvertent negative effects. This research suggests that logbooks may be a useful tool for learners and their supervisors in recording and structuring their supervised practice. However, it would appear that usage rates for logbooks will remain low if they remain voluntary. One strategy for achieving larger amounts of supervised practice is for learner drivers and their supervisors to make supervised practice part of their everyday activities. As well as assisting the learner driver to accumulate the required number of hours of supervised practice, this would ensure that they gain experience in the types of environments that they will probably encounter when driving unaccompanied in the future, such as to and from education or work commitments. There is also a need for policy processes to ensure that parents and professional driving instructors communicate effectively regarding the learner driver's progress. This is required as most learners spend at least some time with a professional instructor despite receiving significant amounts of practice with a private supervisor, yet many supervisors did not discuss their learner's progress with the driving instructor. During the provisional phase, there is a need to strengthen countermeasures to address the high crash risk of these drivers. Although many of these crashes are minor, most involve at least one other vehicle. Therefore, there are social and economic benefits to reducing these crashes.
If the new, post-2007 graduated driver licensing systems do not significantly reduce crash risk, there may be a need to introduce further provisional licence restrictions, such as separate night driving and peer passenger restrictions (as opposed to the hybrid version of these two restrictions operating in both Queensland and New South Wales). Provisional drivers appear to be more likely to obey some provisional licence laws, such as lower blood alcohol content limits, than others, such as speed limits. Therefore, there may be a need to introduce countermeasures that encourage provisional drivers to comply with specific restrictions. When combined, these studies provide significant information regarding graduated driver licensing programs. This program of research investigated graduated driver licensing using cross-sectional and longitudinal designs, developing our understanding of the experiences of novice drivers as they progress through the system and helping to reduce their crash risk once they commence driving by themselves.
Abstract:
Driving and using prescription medicines that have the potential to impair driving is an emerging research area. To date it is characterised by a limited (although growing) number of studies and methodological complexities that make generalisations about impairment due to medications difficult. Consistent evidence has been found for the impairing effects of hypnotics, sedative antidepressants and antihistamines, and narcotic analgesics, although it has been estimated that as many as nine medication classes have the potential to impair driving (Alvarez & del Rio, 2000; Walsh, de Gier, Christopherson, & Verstraete, 2004). There is also evidence for increased negative effects related to concomitant use of other medications and alcohol (Movig et al., 2004; Pringle, Ahern, Heller, Gold, & Brown, 2005). Statistics on the high levels of Australian prescription medication use suggest that consumer awareness of driving impairment due to medicines should be examined. One web-based study has found a low level of awareness, knowledge and risk perceptions among Australian drivers about the impairing effects of various medications on driving (Mallick, Johnston, Goren, & Kennedy, 2007). The lack of awareness and knowledge brings into question the effectiveness of the existing countermeasures. In Australia these consist of the use of ancillary warning labels administered under mandatory regulation and professional guidelines, advice to patients, and the use of Consumer Medicines Information (CMI) with medications that are known to cause impairment. The responsibility for the use of the warnings and related counsel to patients primarily lies with the pharmacist when dispensing relevant medication. A review by the Therapeutic Goods Administration (TGA) noted that in practice, advice to patients may not occur and that CMI is not always available (TGA, 2002). 
Researchers have also found that patients' recall of verbal counsel is very low (Houts, Bachrach, Witmer, Tringali, Bucher, & Localio, 1998). With healthcare observed as increasingly being provided in outpatient conditions (Davis et al., 2006; Vingilis & MacDonald, 2000), establishing the effectiveness of the warning labels as a countermeasure is especially important. There have been recent international developments in medication categorisation systems and associated medication warning labels. In 2005, France implemented a four-tier medication categorisation and warning system to improve patients' and health professionals' awareness and knowledge of related road safety issues (AFSSAPS, 2005). This warning system uses a pictogram and indicates the level of potential impairment in relation to driving performance through the use of colour and advice on the recommended behaviour to adopt towards driving. The comparable Australian system does not indicate the severity level of potential effects, and does not provide specific guidelines on the attitude or actions that the individual should adopt towards driving. It is reliant upon the patient to be vigilant in self-monitoring effects, to understand the potential ways in which they may be affected and how serious these effects may be, and to adopt the appropriate protective actions. This thesis investigates the responses of a sample of Australian hospital outpatients who receive appropriate labelling and counselling advice about potential driving impairment due to prescribed medicines. It aims to provide baseline data on the understanding and use of relevant medications by a Queensland public hospital outpatient sample recruited through the hospital pharmacy. 
It includes an exploration and comparison of the effect of the Australian and French medication warning systems on medication user knowledge, attitudes, beliefs and behaviour, and explores whether there are areas in which the Australian system may be improved by including any beneficial elements of the French system. A total of 358 outpatients were surveyed, and a follow-up telephone survey was conducted with a subgroup of consenting participants who were taking at least one medication that required an ancillary warning label about driving impairment. A complementary study of 75 French hospital outpatients was also conducted to further investigate the performance of the warnings. Not surprisingly, medication use among the Australian outpatient sample was high. The ancillary warning labels required to appear on medications that can impair driving were prevalent. A subgroup of participants was identified as being potentially at-risk of driving impaired, based on their reported recent use of medications requiring an ancillary warning label and level of driving activity. The sample reported previous behaviour and held future intentions that were consistent with warning label advice and health protective action. Participants did not express a particular need for being advised by a health professional regarding fitness to drive in relation to their medication. However, it was also apparent from the analysis that the participants would be significantly more likely to follow advice from a doctor than a pharmacist. High levels of knowledge in terms of general principles about effects of alcohol, illicit drugs and combinations of substances, and related health and crash risks were revealed. This may reflect a sample specific effect. 
The professional guidelines for hospital pharmacists place strong emphasis on ensuring that advisory labels are applied to medicines where applicable and that warning advice is given to all patients taking medication which may affect driving (SHPA, 2006, p. 221). The research program applied selected theoretical constructs from Schwarzer's (1992) Health Action Process Approach, which extends constructs from existing health theories such as the Theory of Planned Behavior (Ajzen, 1991) to better account for the intention-behaviour gap often observed when predicting behaviour. This was undertaken to explore the utility of the constructs in understanding and predicting compliance intentions and behaviour with respect to the mandatory medication warning about driving impairment. This investigation revealed that the theoretical constructs related to intention and planning to avoid driving if an effect from the medication was noticed were useful. Not all the theoretical model constructs that had been demonstrated to be significant predictors in previous research on different health behaviours were significant in the present analyses. Positive outcome expectancies from avoiding driving were found to be important influences on forming the intention to avoid driving if an effect due to medication was noticed. In turn, intention was found to be a significant predictor of planning. Other selected theoretical constructs failed to predict compliance with the Australian warning label advice. It is possible that the limited predictive power of a number of constructs, including risk perceptions, is due to the small sample size obtained at follow-up on which the evaluation is based. Alternatively, it is possible that the theoretical constructs failed to sufficiently account for issues of particular relevance to the driving situation.
The responses of the Australian hospital outpatient sample towards the Australian and French medication warning labels, which differed according to visual characteristics and warning message, were examined. In addition, a complementary study with a sample of French hospital outpatients was undertaken in order to allow general comparisons concerning the performance of the warnings. While a large amount of research exists concerning warning effectiveness, there is little research that has specifically investigated medication warnings relating to driving impairment. General established principles concerning factors that have been demonstrated to enhance warning noticeability and behavioural compliance have been extrapolated and investigated in the present study. The extent to which there is a need for education and improved health messages on this issue was a core issue of investigation in this thesis. Among the Australian sample, the size of the warning label and text, and red colour were the most visually important characteristics. The pictogram used in the French labels was also rated highly, and was salient for a large proportion of the sample. According to the study of French hospital outpatients, the pictogram was perceived to be the most important visual characteristic. Overall, the findings suggest that the Australian approach of using a combination of visual characteristics was important for the majority of the sample but that the use of a pictogram could enhance effects. A high rate of warning recall was found overall and a further important finding was that higher warning label recall was associated with increased number of medication classes taken. These results suggest that increased vigilance and care are associated with the number of medications taken and the associated repetition of the warning message. 
Significantly higher levels of risk perception were found for the French Level 3 (highest severity) label compared with the comparable mandatory Australian ancillary Label 1 warning. Participants' intentions related to the warning labels indicated that they would be more cautious while taking potentially impairing medication displaying the French Level 3 label compared with the Australian Label 1. These are potentially important findings for the Australian context regarding the current driving impairment warnings displayed on medication. The findings raise other important implications for the Australian labelling context. An underlying factor may be the differences in the wording of the warning messages that appear on the Australian and French labels. The French label explicitly states "do not drive" while the Australian label states "if affected, do not drive", and the difference in responses may reflect that less severity is perceived where the situation involves the consumer's self-assessment of their impairment. The differences in the assignment of responsibility between the Australian (the consumer assesses and decides) and French (the doctor assesses and decides) approaches to the decision to drive while taking medication raise the core question of who is best able to assess driving impairment due to medication: the consumer, or the health professional? There are pros and cons related to knowledge, expertise and practicalities with either option. However, if the safety of the consumer is the primary aim, then the trend towards stronger risk perceptions and more consistent and cautious behavioural intentions in relation to the French label suggests that this approach may be more beneficial for consumer safety. The observations from the follow-up survey, although based on a small sample size and descriptive in nature, revealed that just over half of the sample recalled seeing a warning label about driving impairment on at least one of their medications.
The majority of these respondents reported compliance with the warning advice. However, the results indicated variation in responses concerning alcohol intake and modifying the dose of medication or driving habits so that they could continue to drive, which suggests that the warning advice may not be having the desired impact. The findings of this research have implications for current countermeasures in this area. These have included enhancing the role that prescribing doctors have in providing warnings and advice to patients about the impact that their medication can have on driving, increasing consumer perceptions of the authority of pharmacists on this issue, and the reinforcement of the warning message. More broadly, it is suggested that there would be benefit in a wider dissemination of research-based information on increased crash risk and systematic monitoring and publicity about the representation of medications in crashes resulting in injuries and fatalities. Suggestions for future research concern the continued investigation of the effects of medications and interactions with existing medical conditions and other substances on driving skills, effects of variations in warning label design, individual behaviours and characteristics (particularly among those groups who are dependent upon prescription medication) and validation of consumer self-assessment of impairment.
Abstract:
In an aging population, healthcare providers should understand the foodservice preferences of the elderly to reduce the risk of malnutrition through adequate nutrition. Conflicting reports exist regarding elderly patient satisfaction with foodservice [1]. This study aimed to investigate the relationship between age and foodservice satisfaction within the acute care hospital setting. Patient satisfaction was assessed using the Acute Care Hospital Foodservice Patient Satisfaction Questionnaire, with data collected over three years (2008–2010, n = 785) at The Wesley Hospital, Brisbane. Age was grouped into three categories: <50, 51–70 and >70 years. ANOVA was used to measure age-related differences in patients' overall foodservice satisfaction, four foodservice dimensions and two independent statements (meal size and hot food temperature). Results showed that older patients were more satisfied than younger patients, with satisfaction increasing with age regarding food quality (F(2,767) = 15.787, p < 0.001), staff/service issues (F(2,768) = 12.243, p < 0.001), physical environment (F(2,765) = 5.454, p = 0.004), meal size (F(2,730) = 10.646, p < 0.001) and hot food temperature (F(2,730) = 10.646, p < 0.001). While patients aged >70 years also reported greater satisfaction with meal service quality, those aged 51–70 years reported the lowest (F(2,762) = 9.988, p < 0.001). Overall patient satisfaction across all age groups was high (4.26–4.43/5) and a trend of increasing satisfaction with increasing age was evident (F(2,752) = 2.900, p = 0.056). These findings suggest that patients' satisfaction with hospital foodservice increases with age and can assist foodservices in meeting the varying generational expectations of their clients.
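The age comparisons above are one-way ANOVAs: between-group variance is tested against within-group variance via an F ratio. A minimal sketch with invented 5-point satisfaction scores for three age bands (these are not the study data):

```python
def one_way_anova(groups):
    """F statistic and degrees of freedom for a one-way ANOVA across
    the given groups (each a list of scores)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = 0.0
    for g in groups:
        m = sum(g) / len(g)
        ss_within += sum((x - m) ** 2 for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Illustrative scores only: <50, 51-70, and >70 year age bands.
young = [3.8, 4.1, 4.0, 3.9, 4.2]
middle = [4.2, 4.3, 4.1, 4.4, 4.2]
older = [4.5, 4.4, 4.6, 4.3, 4.5]
f_stat, df_b, df_w = one_way_anova([young, middle, older])
print(f"F({df_b},{df_w}) = {f_stat:.2f}")
```

The study's notation F(2,767) follows the same pattern: 2 between-group degrees of freedom (three age bands) and n − 3 within-group degrees of freedom.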
Abstract:
Purpose: This study used magnetic resonance spectroscopy (MRS) to examine metabolite abnormalities in the temporal and frontal lobes of patients with temporal lobe epilepsy (TLE) of differing severity. Methods: We investigated myoinositol in TLE by using short-echo MRS in 34 TLE patients [26 with late-onset TLE (LO-TLE), eight with hippocampal sclerosis (HS-TLE)] and 16 controls. Single-voxel short-echo (35 ms) MR spectra of the temporal and frontal lobes were acquired at 1.5 T and analyzed using LCModel. Results: The temporal lobe ipsilateral to seizure origin in HS-TLE, but not LO-TLE, had reduced N-acetylaspartate (NA) and elevated myoinositol (MI) (HS-TLE NA, 7.8 ± 1.9 mM vs. control NA, 9.2 ± 1.3 mM, p < 0.05; HS-TLE MI, 6.1 ± 1.6 mM vs. control MI, 4.9 ± 0.8 mM, p < 0.05). Frontal lobe MI was low in both patient groups (LO-TLE, 4.3 ± 0.8 mM, p < 0.05; HS-TLE, 3.6 ± 0.5 mM, p < 0.001; controls, 4.8 ± 0.5 mM). Ipsilateral frontal lobes had lower MI (3.8 ± 0.7 mM, p < 0.01) than contralateral frontal lobes (4.3 ± 0.8 mM, p < 0.05). Conclusions: MI changes may distinguish between the seizure focus, where MI is increased, and areas of seizure spread, where MI is decreased.
Abstract:
BACKGROUND: The efficacy of nutritional support in the management of malnutrition in chronic obstructive pulmonary disease (COPD) is controversial. Previous meta-analyses, based on only cross-sectional analysis at the end of intervention trials, found no evidence of improved outcomes. OBJECTIVE: The objective was to conduct a meta-analysis of randomized controlled trials (RCTs) to clarify the efficacy of nutritional support in improving intake, anthropometric measures, and grip strength in stable COPD. DESIGN: Literature databases were searched to identify RCTs comparing nutritional support with controls in stable COPD. RESULTS: Thirteen RCTs (n = 439) of nutritional support [dietary advice (1 RCT), oral nutritional supplements (ONS; 11 RCTs), and enteral tube feeding (1 RCT)] with a control comparison were identified. An analysis of the changes induced by nutritional support and those obtained only at the end of the intervention showed significantly greater increases in mean total protein and energy intakes with nutritional support of 14.8 g and 236 kcal daily. Meta-analyses also showed greater mean (±SE) improvements in favor of nutritional support for body weight (1.94 ± 0.26 kg, P < 0.001; 11 studies, n = 308) and grip strength (5.3%, P < 0.050; 4 studies, n = 156), which was not shown by ANOVA at the end of the intervention, largely because of bias associated with baseline imbalance between groups. CONCLUSION: This systematic review and meta-analysis showed that nutritional support, mainly in the form of ONS, improves total intake, anthropometric measures, and grip strength in COPD. These results contrast with the results of previous analyses that were based on only cross-sectional measures at the end of intervention trials.
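The pooling of trial results described above typically uses inverse-variance weighting: each trial's mean difference is weighted by the reciprocal of its squared standard error, so precise trials dominate. A minimal fixed-effect sketch with hypothetical per-trial body-weight changes (invented numbers, not the review's data):

```python
def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooled mean difference and its SE."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, (1.0 / sum(weights)) ** 0.5

# Hypothetical mean body-weight changes (kg, supplement minus control)
# and their standard errors for four illustrative trials.
effects = [2.1, 1.4, 2.6, 1.8]
ses = [0.6, 0.5, 0.9, 0.4]
pooled, se = fixed_effect_pool(effects, ses)
print(f"pooled difference = {pooled:.2f} kg (SE {se:.2f})")
```

Note the pooled estimate always lies within the range of the trial effects and has a smaller standard error than any single trial, which is the point of pooling; a full meta-analysis of change scores (as in this review) would also need a heterogeneity assessment.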
Abstract:
Aim: The aim of this pilot study is to describe the use of an Emergency Department (ED) at a large metropolitan teaching hospital by patients who speak English or other languages at home. Methods: All data were retrieved from the Emergency Department Information System (EDIS) of this tertiary teaching hospital in Brisbane. Patients were divided into two groups based on the language spoken at home: patients who speak English only at home (SEO) and patients who do not speak English only at home (NSEO). Mode of arrival, length of ED stay and the proportion of hospital admissions were compared between the two groups using SPSS V18. Results: A total of 69,494 patients visited this hospital ED in 2009, with 67,727 (97.5%) in the SEO group and 1,281 (1.8%) in the NSEO group. The proportion of ambulance utilisation as the arrival mode was significantly higher among SEO patients, 23,172 (34.2%), than NSEO patients, 397 (31.0%), p < 0.05. NSEO patients had a longer length of stay in the ED (M = 337.21, SD = 285.9) compared to SEO patients (M = 290.9, SD = 266.8), a difference of 46.3 minutes (95% CI 30.5–62.1, p < 0.001). Hospital admissions among NSEO patients, 402 (31.9%), were proportionally higher than among SEO patients, 17,652 (26.6%), p < 0.001. Conclusion: The lower utilisation rate of ambulance services, longer length of ED stay and higher hospital admission rate among NSEO patients compared to SEO patients are consistent with other international studies and may be due to language barriers.
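The reported length-of-stay difference and its confidence interval can be reproduced from the summary statistics in the abstract using a Welch (unequal-variance) standard error and a normal approximation:

```python
def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Difference in means and a normal-approximation 95% CI using the
    Welch standard error for two independent groups."""
    diff = m1 - m2
    se = (sd1 ** 2 / n1 + sd2 ** 2 / n2) ** 0.5
    return diff, (diff - z * se, diff + z * se)

# Figures taken directly from the abstract:
# NSEO (M = 337.21, SD = 285.9, n = 1,281) vs SEO (M = 290.9, SD = 266.8, n = 67,727).
diff, (lo, hi) = mean_diff_ci(337.21, 285.9, 1281, 290.9, 266.8, 67727)
print(f"difference = {diff:.1f} min, 95% CI ({lo:.1f}, {hi:.1f})")
```

This recovers the abstract's 46.3-minute difference with 95% CI (30.5, 62.1).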
Abstract:
Exposure to ultrafine particles (diameter less than 100 nm) is an important topic in epidemiological and toxicological studies. This study used the average particle number size distribution data obtained from our measurement survey in major micro-environments, together with activity pattern data obtained from the Italian Human Activity Pattern Survey, to estimate the tracheobronchial and alveolar dose of submicrometer particles for different population age groups in Italy. We developed a numerical methodology based on the Monte Carlo method in order to estimate the best combination from a probabilistic point of view. More than 10^6 different cases were analyzed according to a purpose-built subroutine, and our results showed that the daily alveolar particle number and surface area deposited for all of the age groups considered were equal to 1.5 × 10^11 particles and 2.5 × 10^15 m^2, respectively, varying slightly for males and females living in Northern or Southern Italy. In terms of tracheobronchial deposition, the corresponding values for daily particle number and surface area for all age groups were equal to 6.5 × 10^10 particles and 9.9 × 10^14 m^2, respectively. Overall, the highest contributions were found to come from indoor cooking (females), working time (males) and transportation (i.e., traffic-derived particles) (children).
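The dose-estimation idea described above (sample a day's time budget across micro-environments, then sum concentration × inhalation rate × deposition fraction × time) can be sketched as follows. All concentrations, time budgets, and rates below are invented placeholders, not the study's measured values:

```python
import random

random.seed(1)

# Hypothetical micro-environments:
# (particle number concentration [particles/cm^3], daily hours as (mean, sd)).
MICROENVS = {
    "cooking":   (1.0e5, (1.0, 0.3)),
    "transport": (4.0e4, (1.5, 0.5)),
    "sleeping":  (5.0e3, (8.0, 0.5)),
    "other":     (1.0e4, (13.5, 1.0)),
}
INHALATION = 0.54    # m^3/h, illustrative adult inhalation rate
DEP_FRACTION = 0.5   # assumed alveolar deposition fraction

def daily_dose():
    """One Monte Carlo draw of the daily deposited particle number."""
    dose = 0.0
    for conc, (mu, sd) in MICROENVS.values():
        hours = max(0.0, random.gauss(mu, sd))
        # particles/cm^3 -> particles/m^3 (x 1e6), times m^3/h, times hours
        dose += conc * 1e6 * INHALATION * DEP_FRACTION * hours
    return dose

doses = [daily_dose() for _ in range(10_000)]
mean_dose = sum(doses) / len(doses)
print(f"mean daily alveolar dose ~ {mean_dose:.2e} particles")
```

A full implementation would draw the time budget so that hours sum to 24 and use size-resolved deposition fractions; this sketch only illustrates the probabilistic combination step.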