332 results for Wyoming (Süd)


Relevance: 10.00%

Publisher:

Abstract:

Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches to the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean (±SD) BMI = 34.2±7.0 kg/m2). The treatment effect showed reductions in total fat (1.4% of energy) and saturated fat (0.9% of energy) intake, body weight (0.7 kg) and waist circumference (2.0 cm). In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group.
The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in interpreting the results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges of measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam, but forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications were made to allow clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.
Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record.

Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57kJ/day vs.
274kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake was evaluated in a sample of 10 adults (6 males; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with an EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR method, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found between the two dietary measures across energy and macronutrients, except fat (r=0.24). High agreement was observed between dietitians for estimates of energy and macronutrients derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods.

This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record.
The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
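The EI:TEE comparison in Study 4 is simple arithmetic: reported energy intake divided by measured total energy expenditure, with ratios well below 1 suggesting under-reporting. A minimal sketch follows; the 0.70 cut-off and the example EI/TEE values are hypothetical illustrations, as the abstract does not state the thesis's plausibility criterion.

```python
# Sketch of the EI:TEE check from Study 4: reported energy intake (EI)
# divided by measured total energy expenditure (TEE).
# UNDER_REPORTING_CUTOFF is a hypothetical value for illustration only.
UNDER_REPORTING_CUTOFF = 0.70

def ei_tee_ratio(energy_intake_mj: float, tee_mj: float) -> float:
    """Ratio of reported energy intake to measured energy expenditure."""
    return energy_intake_mj / tee_mj

# Hypothetical EI and TEE chosen to match the group-level 0.76 from the abstract
ratio = ei_tee_ratio(7.6, 10.0)
print(f"EI:TEE = {ratio:.2f}, under-reporting suspected: {ratio < UNDER_REPORTING_CUTOFF}")
```

Both methods in the study produced a group-level ratio of 0.76, i.e. a similar degree of under-reporting regardless of recording method.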


Newly licensed drivers on a provisional or intermediate licence have the highest crash risk when compared with any other group of drivers. In comparison, learner drivers have the lowest crash risk. Graduated driver licensing is one countermeasure that has been demonstrated to effectively reduce the crashes of novice drivers. This thesis examined the graduated driver licensing systems in two Australian states in order to better understand the behaviour of learner drivers, provisional drivers and the supervisors of learner drivers. In doing so, the thesis investigated the personal, social and environmental influences on novice driver behaviour, as well as providing effective baseline data against which to measure subsequent changes to the licensing systems.

In the first study, conducted prior to the changes to the graduated driver licensing system introduced in mid-2007, drivers who had recently obtained their provisional licence in Queensland and New South Wales were interviewed by telephone regarding their experiences while driving on their learner licence. Of the 687 eligible people approached to participate at driver licensing centres, 392 completed the study, representing a response rate of 57.1 per cent. At the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. The results suggested that requiring learners to complete a mandated number of hours of supervised practice affects the number of hours that learners report completing. While most learners from New South Wales reported meeting the requirement to complete 50 hours of practice, it appears that many stopped practising soon after this goal was achieved. In contrast, learners from Queensland, who were not required to complete a specific number of hours at the time of the survey, tended to fall into three groups.
The first group appeared to complete the minimum number of hours required to pass the test (less than 26 hours), the second group completed 26 to 50 hours of supervised practice, while the third group completed significantly more practice than the first two groups (over 100 hours of supervised practice). Learner drivers in both states reported generally complying with the road laws and were unlikely to report that they had been caught breaking the road rules. They also indicated that they planned to obey the road laws once they obtained their provisional licence. However, they were less likely to intend to comply with recommended actions to reduce crash risk, such as limiting their driving at night. This study also identified that there were relatively low levels of unaccompanied driving (approximately 15 per cent of the sample), very few driving offences committed (five per cent of the sample) and that learner drivers tended to use a mix of private and professional supervisors (although the majority of practice was undertaken with private supervisors). Consistent with the international literature, this study identified that very few learner drivers had experienced a crash (six per cent) while on their learner licence.

The second study was also conducted prior to changes to the graduated driver licensing system and involved follow-up interviews with the participants of the first study after they had approximately 21 months of driving experience on their provisional licence. Of the 392 participants who completed the first study, 233 completed the second interview (representing a response rate of 59.4 per cent). As with the first study, at the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. For instance, novice drivers from New South Wales were required to progress through two provisional licence phases (P1 and P2) while there was only one provisional licence phase in Queensland.
Among the participants in this second study, almost all provisional drivers (97.9 per cent) owned or had access to a vehicle for regular driving. They reported that they were unlikely to break road rules, such as driving after a couple of drinks, but were also unlikely to comply with recommended actions, such as limiting their driving at night. When their provisional driving behaviour was compared to the stated intentions from the first study, the results suggested that their intentions were not a strong predictor of their subsequent behaviour. Their perception of the risk associated with driving declined from when they first obtained their learner licence to when they had acquired provisional driving experience. Just over 25 per cent of participants in the second study reported that they had been caught committing driving offences while on their provisional licence. Nearly one-third of participants had crashed while driving on a provisional licence, although few of these crashes resulted in injuries or hospitalisations.

To complement the first two studies, the third study examined the experiences of supervisors of learner drivers, as well as their perceptions of their learners' experiences. This study was undertaken after the introduction of the new graduated driver licensing systems in Queensland and New South Wales in mid-2007, providing insights into the impacts of these changes from the perspective of supervisors. The third study involved an internet survey of 552 supervisors of learner drivers. Within the sample, approximately 50 per cent of participants supervised their own child. Other supervisors included other parents or step-parents, professional driving instructors and siblings. For two-thirds of the sample, this was the first learner driver that they had supervised. Participants had provided an average of 54.82 hours (SD = 67.19) of supervision.
Seventy-three per cent of participants indicated that their learners' logbooks were accurate or very accurate in most cases, although parents were more likely than non-parents to report that their learner's logbook was accurate (F(1,546) = 7.74, p = .006). There was no difference between parents and non-parents regarding whether they believed the logbook system was effective (F(1,546) = .01, p = .913). The majority of the sample reported that their learner driver had had some professional driving lessons. Notwithstanding this, a significant proportion (72.5 per cent) believed that parents should be either very involved or involved in teaching their child to drive, with parents being more likely than non-parents to hold this belief. In the post mid-2007 graduated driver licensing system, Queensland learner drivers are able to record three hours of supervised practice in their logbook for every hour that is completed with a professional driving instructor, up to a total of ten hours. Despite this, there was no difference identified between Queensland and New South Wales participants regarding the amount of time that they reported their learners spent with professional driving instructors (χ2(1) = 2.56, p = .110). Supervisors from New South Wales were more likely to ensure that their learner driver complied with the road laws. Additionally, with the exception of drug driving laws, New South Wales supervisors believed it was more important to teach safety-related behaviours, such as remaining within the speed limit, car control and hazard perception, than those from Queensland. This may be indicative of more intensive road safety educational efforts in New South Wales or the longer time that graduated driver licensing has operated in that jurisdiction. However, other factors may have contributed to these findings and further research is required to explore the issue.
In addition, supervisors reported that their learner driver was involved in very few crashes (3.4 per cent) and offences (2.7 per cent). This relatively low reported crash rate is similar to that identified in the first study.

Most of the graduated driver licensing research to date has been applied in nature and lacked a strong theoretical foundation. These studies used Akers' social learning theory to explore the self-reported behaviour of novice drivers and their supervisors. This theory was selected as it has previously been found to provide a relatively comprehensive framework for explaining a range of driver behaviours, including novice driver behaviour. Sensation seeking was also used in the first two studies to complement the non-social rewards component of Akers' social learning theory. This program of research identified that both Akers' social learning theory and sensation seeking were useful in predicting the behaviour of learner and provisional drivers over and above socio-demographic factors. Within the first study, Akers' social learning theory accounted for an additional 22 per cent of the variance in learner driver compliance with the law, over and above a range of socio-demographic factors such as age, gender and income. The two constructs within Akers' theory which were significant predictors of learner driver compliance were the behavioural dimension of differential association relating to friends, and anticipated rewards. Sensation seeking predicted an additional six per cent of the variance in learner driver compliance with the law. When considering a learner driver's intention to comply with the law while driving on a provisional licence, Akers' social learning theory accounted for an additional 10 per cent of the variance above socio-demographic factors, with anticipated rewards being a significant predictor. Sensation seeking predicted an additional four per cent of the variance.
The results suggest that the more rewards individuals anticipate for complying with the law, the more likely they are to obey the road rules. Further research is needed to identify which specific rewards are most likely to encourage novice drivers' compliance with the law. In the second study, Akers' social learning theory predicted an additional 40 per cent of the variance in self-reported compliance with road rules over and above socio-demographic factors, while sensation seeking accounted for an additional five per cent of the variance. A number of Akers' social learning theory constructs significantly predicted provisional driver compliance with the law, including the behavioural dimension of differential association for friends, the normative dimension of differential association, personal attitudes and anticipated punishments. The consistent prediction of additional variance by sensation seeking over and above the variables within Akers' social learning theory in both the first and second studies suggests that sensation seeking is not fully captured within the non-social rewards dimension of Akers' social learning theory, at least for novice drivers. It appears that novice drivers are strongly influenced by the desire to engage in new and intense experiences. While socio-demographic factors and the perception of risk associated with driving had an important role in predicting the behaviour of the supervisors of learner drivers, Akers' social learning theory provided further levels of prediction over and above these factors. The Akers' social learning theory variables predicted an additional 14 per cent of the variance in the extent to which supervisors ensured that their learners complied with the law, and an additional eight per cent of the variance in the supervisors' provision of a range of practice experiences.
The normative dimension of differential association, personal attitudes towards the use of professional driving instructors and anticipated rewards were significant predictors of supervisors ensuring that their learner complied with the road laws, while the normative dimension was important for the range of practice provided. This suggests that supervisors who engage with other supervisors who ensure their learners comply with the road laws and provide a range of practice are more likely to also engage in these behaviours themselves.

Within this program of research there were several limitations, including the method of recruitment of participants in the first study, the lower participation rate in the second study, an inability to calculate a response rate for the third study, and the use of self-report data in all three studies. In the first study, participants were only recruited from larger driver licensing centres to ensure that there was a sufficient throughput of drivers to approach. This may have biased the results due to possible differences in learners who obtain their licences in locations with smaller licensing centres. Only 59.4 per cent of the sample in the first study completed the second study. This may be a limitation if there was a common reason why those not participating were unable to complete the interview, leading to a systematic impact on the results. The third study used a combination of convenience and snowball sampling, which meant that it was not possible to calculate a response rate. All three studies used self-report data which, in many cases, is considered a limitation. However, self-report data may be the only method that can be used to obtain some information. This program of research has a number of implications for countermeasures in both the learner licence phase and the provisional licence phase.
During the learner phase, licensing authorities need to carefully consider the number of hours that they mandate learner drivers must complete before they obtain their provisional driving licence, as setting too low a requirement may have inadvertent negative effects. This research suggests that logbooks may be a useful tool for learners and their supervisors in recording and structuring their supervised practice. However, it would appear that usage rates for logbooks will remain low if they remain voluntary. One strategy for achieving larger amounts of supervised practice is for learner drivers and their supervisors to make supervised practice part of their everyday activities. As well as assisting the learner driver to accumulate the required number of hours of supervised practice, this would ensure that they gain experience in the types of environments that they will probably encounter when driving unaccompanied in the future, such as to and from education or work commitments. There is also a need for policy processes to ensure that parents and professional driving instructors communicate effectively regarding the learner driver's progress. This is required as most learners spend at least some time with a professional instructor despite receiving significant amounts of practice with a private supervisor. However, many supervisors did not discuss their learner's progress with the driving instructor.

During the provisional phase, there is a need to strengthen countermeasures to address the high crash risk of these drivers. Although many of these crashes are minor, most involve at least one other vehicle. Therefore, there are social and economic benefits to reducing these crashes.
If the new, post-2007 graduated driver licensing systems do not significantly reduce crash risk, there may be a need to introduce further provisional licence restrictions, such as separate night driving and peer passenger restrictions (as opposed to the hybrid version of these two restrictions operating in both Queensland and New South Wales). Provisional drivers appear to be more likely to obey some provisional licence laws, such as lower blood alcohol content limits, than others, such as speed limits. Therefore, there may be a need to introduce countermeasures to encourage provisional drivers to comply with specific restrictions. When combined, these studies provide significant information regarding graduated driver licensing programs. This program of research investigated graduated driver licensing using cross-sectional and longitudinal designs in order to develop our understanding of the experiences of novice drivers as they progress through the system, and ultimately to help reduce crash risk once they commence driving by themselves.
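The hierarchical-regression findings above all rest on one incremental-variance calculation: a predictor block's contribution is the change in R² when its variables enter the model. A minimal sketch using the learner-compliance figures; the baseline R² for the socio-demographic block is hypothetical, since only the increments are reported in the abstract.

```python
# Incremental variance (delta R^2) in hierarchical regression: the
# contribution of a predictor block is R^2 with the block minus R^2 without.
def delta_r_squared(r2_with_block: float, r2_without_block: float) -> float:
    """Additional proportion of variance explained by a predictor block."""
    return r2_with_block - r2_without_block

r2_sociodemographic = 0.10                  # hypothetical baseline R^2
r2_plus_akers = r2_sociodemographic + 0.22  # Akers' SLT added 22% (reported)
r2_plus_sensation = r2_plus_akers + 0.06    # sensation seeking added 6% (reported)

print(round(delta_r_squared(r2_plus_akers, r2_sociodemographic), 2))
print(round(delta_r_squared(r2_plus_sensation, r2_plus_akers), 2))
```

That sensation seeking still adds variance after the Akers' block enters is the basis for the claim that it is not fully captured by the non-social rewards dimension.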


Deprivation assessed using the Index of Multiple Deprivation (IMD) has been shown to be an independent risk factor for 1-year mortality in outpatients with chronic obstructive pulmonary disease (COPD) (Collins et al, 2010). The IMD combines a number of economic and social indicators (eg, health, education, employment) into one overall deprivation score; the higher the score, the greater an individual's deprivation. Whilst malnutrition in COPD has been linked to increased healthcare use, it is not clear whether deprivation is also independently associated. This study aimed to investigate the influence of deprivation on 1-year healthcare utilisation in outpatients with COPD. IMD was established for 424 outpatients with COPD according to the geographical location of each patient's address (postcode) (Noble et al, 2008) and related to their healthcare use in the year following screening. Patients were routinely screened in outpatient clinics for malnutrition using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia, 2003); mean age 73 (SD 9.9) years; body mass index 25.8 (SD 6.3) kg/m2; healthcare use was collected for 1 year from screening (Abstract P147 Table 1). Deprivation assessed using the IMD (mean 15.9; SD 11.1) was found to be a significant predictor of the frequency and duration of emergency hospital admissions as well as the duration of elective hospital admissions. Deprivation was also linked to reduced secondary care outpatient appointment attendance, but not to an increase in failure to attend, and deprivation was not associated with increased disease severity as classified by the GOLD criteria (p=0.580). COPD outpatients residing in more deprived areas experience increased hospitalisation rates but decreased outpatient appointment attendance. The underlying reason behind this disparity in healthcare use requires further investigation.


Introduction: Smoking status in outpatients with chronic obstructive pulmonary disease (COPD) has been associated with a low body mass index (BMI) and reduced mid-arm muscle circumference (Cochrane & Afolabi, 2004). Individuals with COPD identified as malnourished have also been found to be twice as likely to die within 1 year compared to non-malnourished patients (Collins et al., 2010). Although malnutrition is both preventable and treatable, it is not clear what influence current smoking status, another modifiable risk factor, has on malnutrition risk. The current study aimed to establish the influence of smoking status on malnutrition risk and 1-year mortality in outpatients with COPD.

Methods: A prospective nutritional screening survey was carried out between July 2008 and May 2009 at a large teaching hospital (Southampton General Hospital) and a smaller community hospital within Hampshire (Lymington New Forest Hospital). In total, 424 outpatients with a diagnosis of COPD were routinely screened using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia, 2003); 222 males, 202 females; mean (SD) age 73 (9.9) years; mean (SD) BMI 25.9 (6.4) kg/m2. Smoking status on the date of screening was obtained for 401 of the outpatients. Severity of COPD was assessed using the GOLD criteria, and social deprivation was determined using the Index of Multiple Deprivation (Noble et al., 2008).

Results: The overall prevalence of malnutrition (medium + high risk) was 22%, with 32% of current smokers at risk (current smokers accounted for 19% of the total COPD population). In comparison, 19% of non-smokers and ex-smokers were likely to be malnourished [odds ratio, 1.965; 95% confidence interval (CI), 1.133–3.394; P = 0.015]. Smoking status remained an independent risk factor for malnutrition even after adjustment for age, social deprivation and disease severity (odds ratio, 2.048; 95% CI, 1.085–3.866; P = 0.027) using binary logistic regression.
After adjusting for age, disease severity, social deprivation and smoking status, malnutrition remained a significant predictor of 1-year mortality [odds ratio (medium + high risk versus low risk), 2.161; 95% CI, 1.021–4.573; P = 0.044], whereas smoking status did not (odds ratio for smokers versus ex-smokers + non-smokers, 1.968; 95% CI, 0.788–4.913; P = 0.147).

Discussion: This study highlights the potential importance of combined nutritional support and smoking cessation in order to treat malnutrition. The close association between smoking status and malnutrition risk in COPD suggests that smoking is an important consideration in the nutritional management of malnourished COPD outpatients.

Conclusions: Smoking status in COPD outpatients is a significant independent risk factor for malnutrition and a weaker (non-significant) predictor of 1-year mortality. Malnutrition significantly predicted 1-year mortality.

References: Cochrane, W.J. & Afolabi, O.A. (2004) Investigation into the nutritional status, dietary intake and smoking habits of patients with chronic obstructive pulmonary disease. J. Hum. Nutr. Diet. 17, 3–11. Collins, P.F., Stratton, R.J., Kurukulaaratchy, R., Warwick, H., Cawood, A.L. & Elia, M. (2010) ‘MUST’ predicts 1-year survival in outpatients with chronic obstructive pulmonary disease. Clin. Nutr. 5, 17. Elia, M. (Ed) (2003) The ‘MUST’ Report. BAPEN. http://www.bapen.org.uk (accessed 30 March 2011). Noble, M., McLennan, D., Wilkinson, K., Whitworth, A. & Barnes, H. (2008) The English Indices of Deprivation 2007. http://www.communities.gov.uk (accessed 30 March 2011).
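The unadjusted odds ratio reported above can be approximately reconstructed from the quoted risk percentages alone. A sketch of that arithmetic; the small discrepancy from the published 1.965 reflects rounding of the percentages, not a different method.

```python
# Reconstructing the unadjusted odds ratio for malnutrition in current
# smokers from the reported proportions: 32% of smokers vs 19% of
# non/ex-smokers at malnutrition risk.
def odds(p: float) -> float:
    """Odds corresponding to a probability p."""
    return p / (1.0 - p)

def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Unadjusted odds ratio from two group proportions."""
    return odds(p_exposed) / odds(p_unexposed)

print(f"approximate OR = {odds_ratio(0.32, 0.19):.2f}")  # vs published 1.965
```

The adjusted odds ratio (2.048) cannot be reproduced this way, as it comes from a binary logistic regression conditioning on age, deprivation and disease severity.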


Deprivation is linked to increased incidence of a number of chronic diseases, but its relationship to chronic obstructive pulmonary disease (COPD) is uncertain despite suggestions that the socioeconomic gradient seen in COPD is as great, if not greater, than in any other disease (Prescott and Vestbo).1 There is also a need to take into account the confounding effects of malnutrition, which has been shown to be independently linked to increased mortality (Collins et al).2 The current study investigated the influence of social deprivation on 1-year survival rates in COPD outpatients, independently of malnutrition. 424 outpatients with COPD were routinely screened for malnutrition risk using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia),3 between July 2008 and May 2009; 222 males and 202 females; mean age 73 (SD 9.9) years; body mass index 25.8 (SD 6.3) kg/m2. Each individual's deprivation was calculated using the Index of Multiple Deprivation (IMD), which was established according to the geographical location of each patient's address (postcode). The IMD combines a number of indicators covering economic, housing and social issues (eg, health, education and employment) into a single deprivation score (Noble et al).4 The lower the IMD score, the lower an individual's deprivation. The IMD was assigned to each outpatient at the time of screening and related to 1-year mortality from the date screened. Outpatients who died within 1 year of screening were significantly more likely to reside within a deprived postcode (IMD 19.7±SD 13.1 vs 15.4±SD 10.7; p=0.023, OR 1.03, 95% CI 1.00 to 1.06) than those who survived. Deprivation remained a significant independent risk factor for 1-year mortality even when adjusted for malnutrition as well as age, gender and disease severity (binary logistic regression; p=0.008, OR 1.04, 95% CI 1.04 to 1.07). Deprivation was not associated with disease severity (p=0.906) or body mass index, kg/m2 (p=0.921) using ANOVA.
This is the first study to show that deprivation, assessed using IMD, is associated with increased 1-year mortality in outpatients with COPD independently of malnutrition, age and disease severity. Deprivation should be considered in the targeted management of these patients.
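The adjusted odds ratio above (1.04) applies per single IMD point, so the implied odds multiply across a wider deprivation gap. The sketch below shows that compounding; the 10-point gap is an illustrative example, not a comparison reported in the study.

```python
# Interpreting a per-unit odds ratio for a continuous predictor (IMD score):
# the odds ratio across a gap of `units` points is the per-unit OR compounded.
def compounded_or(per_unit_or: float, units: float) -> float:
    """Odds ratio implied across `units` of a continuous predictor."""
    return per_unit_or ** units

# Hypothetical 10-point IMD difference, using the adjusted OR of 1.04
print(f"implied OR across a 10-point IMD gap: {compounded_or(1.04, 10):.2f}")
```

This is why a per-point OR that looks small (1.04) can still correspond to a materially higher mortality risk between less and more deprived postcodes.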


Resumo:

Aim: The aim of this pilot study was to describe the use of an Emergency Department (ED) at a large metropolitan teaching hospital by patients who speak English or other languages at home. Methods: All data were retrieved from the Emergency Department Information System (EDIS) of this tertiary teaching hospital in Brisbane. Patients were divided into two groups based on the language spoken at home: patients who speak English only at home (SEO) and patients who speak a language other than English at home (NSEO). Mode of arrival, length of ED stay and the proportion of hospital admissions were compared between the two groups using SPSS V18 software. Results: A total of 69,494 patients visited this hospital ED in 2009, with 67,727 (97.5%) in the SEO group and 1,281 (1.8%) in the NSEO group. The proportion arriving by ambulance was significantly higher among SEO patients, 23,172 (34.2%), than NSEO patients, 397 (31.0%), p<0.05. NSEO patients had a longer length of stay in the ED (M = 337.21, SD = 285.9 minutes) than SEO patients (M = 290.9, SD = 266.8 minutes), a difference of 46.3 minutes (95% CI 30.5 to 62.1, p<0.001). Hospital admissions among NSEO patients, 402 (31.9%), were proportionally higher than among SEO patients, 17,652 (26.6%), p<0.001. Conclusion: The lower utilisation rate of ambulance services, longer length of ED stay and higher hospital admission rate in NSEO patients compared to SEO patients are consistent with other international studies and may be due to language barriers.
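For summary statistics like these, the confidence interval for the between-group difference in mean length of stay can be reproduced with a normal-approximation (Welch-type) interval; a minimal sketch using the figures reported in the abstract:

```python
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """95% CI for the difference in two group means
    (normal approximation with unpooled variances)."""
    diff = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, diff - z * se, diff + z * se

# NSEO vs SEO length of ED stay in minutes, values from the abstract
diff, lo, hi = mean_diff_ci(337.21, 285.9, 1281, 290.9, 266.8, 67727)
print(round(diff, 1), round(lo, 1), round(hi, 1))  # 46.3 30.5 62.1
```

The recomputed interval matches the 30.5 to 62.1 minute bounds reported above.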


Resumo:

BACKGROUND: Malnutrition, and poor intake during hospitalisation, are common in older medical patients. Better understanding of patient-specific factors associated with poor intake may inform nutritional interventions. AIMS: To measure the proportion of older medical patients with inadequate nutritional intake, and identify patient-related factors associated with this outcome. METHODS: Prospective cohort study enrolling consecutive consenting medical inpatients aged 65 years or older. Primary outcome was energy intake less than resting energy expenditure estimated using weight-based equations. Energy intake was calculated for a single day using direct observation of plate waste. Explanatory variables included age, gender, number of co-morbidities, number of medications, diagnosis, usual residence, nutritional status, functional and cognitive impairment, depressive symptoms, poor appetite, poor dentition, and dysphagia. RESULTS: Of 134 participants (mean age 80 years, 51% female), only 41% met estimated resting energy requirements. Mean energy intake was 1220 kcal/day (SD 440), or 18.1 kcal/kg/day. Factors associated with inadequate energy intake in multivariate analysis were poor appetite, higher BMI, diagnosis of infection or cancer, delirium and need for assistance with feeding. CONCLUSIONS: Inadequate nutritional intake is common, and patient factors contributing to poor intake need to be considered in nutritional interventions.
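Comparing each patient's observed intake against a weight-based resting-energy estimate is a simple threshold check; a minimal sketch assuming an illustrative 20 kcal/kg/day rule, since the abstract does not specify which weight-based equation was used:

```python
def meets_resting_requirement(intake_kcal, weight_kg, kcal_per_kg=20.0):
    """Return (estimated resting requirement, True if intake covers it).

    kcal_per_kg is an illustrative placeholder for the study's
    unspecified weight-based equation, not the actual constant used.
    """
    requirement = kcal_per_kg * weight_kg
    return requirement, intake_kcal >= requirement

# Example: mean observed intake of 1220 kcal/day for a 75 kg patient
req, adequate = meets_resting_requirement(1220, 75)
print(req, adequate)  # 1500.0 False
```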


Resumo:

Background: The capacity to delay gratification has been shown to be a very important developmental task for children who are developing typically. There is evidence that children with Down syndrome have more difficulty with a delay of gratification task than typically developing children of the same mental age. This study focused on the strategies children with Down syndrome use in a delay of gratification situation, to ascertain whether these contribute to the differences in delay times from those of typically developing children. Method: Thirty-two children with Down syndrome (15 females) and 50 typically developing children participated in the study. Children with Down syndrome had a mental age, as measured by the Stanford-Binet IV, between 36 and 66 months (M = 45.66). The typically developing children had a mean chronological age of 45.76 months. Children participated in a delay of gratification task in which they were offered one or two small treats and asked which they preferred. They were then told that they could have the two treats if they waited for the researcher to return (an undisclosed time of 15 min). If they did not want to wait any longer, they could call the researcher back, but would then receive only one treat. Twenty-two of the children with Down syndrome and 43 of the typically developing children demonstrated understanding of the task, and their data are included here. Sessions were videotaped for later analysis. Results: There were significant differences in the mean waiting times of the two groups. The mean waiting time for children with Down syndrome was 181.32 s (SD = 347.62) and for the typically developing children 440.21 s (SD = 377.59). Eighteen percent of the group with Down syndrome waited for the researcher to return, compared with 35% of the typically developing group. Sixty-four percent of children with Down syndrome called the researcher back and the remainder (18%) violated.
In the typically developing group, 37% called the researcher back and 28% violated. The mean waiting time for the group of children with Down syndrome who called the researcher back was 24 s, so examination of strategy use in this group was very limited. Strategy use appeared quite similar across the groups who waited the full 15 min. Conclusions: These results confirm the difficulty children with Down syndrome have in delaying gratification. Teaching strategies for waiting, using information drawn from the behaviours of children who are developing typically, may be a useful undertaking. Examination of other contributors to delay ability (e.g., language skills) is also likely to be helpful in understanding the difficulties demonstrated in delaying gratification.


Resumo:

Visual adaptation regulates contrast sensitivity during dynamically changing light conditions (Crawford, 1947; Hecht, Haig & Chase, 1937). These adaptation dynamics are unknown under dim (mesopic) light levels, when the rod (R) and long- (L), medium- (M) and short- (S) wavelength cone photoreceptor classes contribute to vision via interactions in shared non-opponent magnocellular (MC), chromatically opponent parvocellular (PC) and koniocellular (KC) visual pathways (Dacey, 2000). This study investigated the time-course of adaptation and the post-receptoral pathways mediating receptor-specific rod and cone interactions under mesopic illumination. A four-primary photostimulator (Pokorny, Smithson & Quinlan, 2004) was used to independently control the activity of the four photoreceptor classes and their post-receptoral visual pathways in human observers. In the first experiment, the contrast sensitivity and time-course of visual adaptation under mesopic illumination were measured for receptoral (L, S, R) and post-receptoral (LMS, LMSR, L-M) stimuli. An incremental (Rapid-ON) sawtooth conditioning pulse biased detection to ON-cells within the visual pathways, and sensitivity was assayed relative to pulse onset using a briefly presented incremental probe that did not alter adaptation. Cone-Cone interactions with luminance stimuli (L cone, LMS, LMSR) reduced sensitivity by 15% and the time course of recovery was 25 ± 5 ms-1 (μ ± SEM). Sensitivity loss for PC-mediated (+L-M) chromatic stimuli was smaller (8%) than for luminance stimuli and recovery was slower (μ = 2.95 ± 0.05 ms-1), with KC-mediated (S cone) chromatic stimuli showing a high sensitivity loss (38%) and the slowest recovery time (1.6 ± 0.2 ms-1). Rod-Rod interactions increased sensitivity by 20% and the time course of recovery was 0.7 ± 0.2 ms-1 (μ ± SD). Compared to these interaction types, Rod-Cone interactions reduced sensitivity to a lesser degree (5%) and showed the fastest recovery (μ = 43 ± 7 ms-1).
In the second experiment, rod contribution to the magnocellular, parvocellular and koniocellular post-receptoral pathways under mesopic illumination was determined as a function of incremental stimulus duration and waveform (rectangular; sawtooth) using a rod colour match procedure (Cao, Pokorny & Smith, 2005; Cao, Pokorny, Smith & Zele, 2008a). For a 30% rod increment, a cone match required a decrease in [L/(L+M)] and an increase in [L+M] and [S/(L+M)], giving a greenish-blue and brighter appearance for probe durations of 75 ms or longer. Probe durations of less than 75 ms showed an increase in [L+M] and no change in chromaticity [L/(L+M) or S/(L+M)], suggesting mediation by the MC pathway only for short-duration rod stimuli. We advance previous studies by determining the time-course and nature of photoreceptor-specific retinal interactions in the three post-receptoral pathways under mesopic illumination. In the first experiment, the time-course of adaptation for ON-cell processing was determined, revealing opponent-cell facilitation in the chromatic PC and KC pathways. The Rod-Rod and Rod-Cone data identify previously unknown interaction types that act to maintain contrast sensitivity during dynamically changing light conditions and improve the speed of light adaptation under mesopic light levels. The second experiment determined the degree of rod contribution to the inferred post-receptoral pathways as a function of the temporal properties of the rod signal. The understanding of the mechanisms underlying interactions between photoreceptors under mesopic illumination has implications for the study of retinal disease. Visual function has been shown to be reduced in persons with age-related maculopathy (ARM) risk genotypes prior to clinical signs of the disease (Feigl, Cao, Morris & Zele, 2011), and disturbances in rod-mediated adaptation have been shown in early phases of ARM (Dimitrov, Guymer, Zele, Anderson & Vingrys, 2008; Feigl, Brown, Lovie-Kitchin & Swann, 2005).
Also, the understanding of retinal networks controlling vision enables the development of international lighting standards to optimise visual performance under dim light levels (e.g. work-place environments, transportation).
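A four-primary photostimulator of this kind isolates one photoreceptor class by silent substitution: primary intensities are chosen so the summed excitation changes for the target receptor class but not for the others. A minimal two-receptor, two-primary sketch with made-up sensitivity values (the real instrument solves the analogous four-receptor, four-primary system):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the linear system [[a, b], [c, d]] @ [x, y] = [e, f]
    by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Made-up spectral sensitivities of two receptor classes to two primaries
target_sens = (0.8, 0.3)   # receptor class we want to modulate
silent_sens = (0.2, 0.9)   # receptor class that must see no net change

# Primary modulations giving +1 unit of excitation to the target
# receptor while leaving the 'silent' receptor unchanged
p1, p2 = solve_2x2(target_sens[0], target_sens[1],
                   silent_sens[0], silent_sens[1],
                   1.0, 0.0)

# Check the substitution really is silent for the second receptor
target_change = target_sens[0] * p1 + target_sens[1] * p2
silent_change = silent_sens[0] * p1 + silent_sens[1] * p2
```

With four receptor classes and four primaries, the same idea becomes a 4x4 linear solve per stimulus.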


Resumo:

Purpose: Evidence suggests that improved empathy among healthcare professionals directly impacts healthcare outcomes. However, the ‘nebulous’ properties of empathic behaviour often mean that healthcare profession educators fail to incorporate the explicit teaching and assessment of empathy within the curriculum. This represents a potential mismatch between what is taught by universities and what is actually needed in the healthcare industry. The objective of this study was to assess the extent of empathy in paramedic students across seven Australian universities. Methods: A cross-sectional study using a paper-based questionnaire with a convenience sample of first-, second- and third-year undergraduate paramedic students. Student empathy levels were measured using a standardised self-reporting instrument, the Jefferson Scale of Physician Empathy – Health Profession Students (JSPE-HPS). Findings: A total of 783 students participated in the study, of whom 57% were female. The overall JSPE-HPS mean score was 106.74 (SD=14.8). Females had a higher mean empathy score than males (108.69 vs 103.58, p=0.042). First-year undergraduate paramedic students had the lowest mean empathy level, 106.29 (SD=15.40), and second-years the highest, 107.17 (SD=14.90). Value: The overall findings provide a framework for educators to begin constructing guidelines on the need to incorporate, promote and instil empathy in paramedic students in order to better prepare them for future out-of-hospital healthcare practice.


Resumo:

Abstract: Objectives: Evidence suggests that improved empathy among healthcare professionals directly impacts healthcare outcomes. However, the ‘nebulous’ properties of empathic behaviour often mean that healthcare profession educators fail to incorporate the explicit teaching and assessment of empathy within the curriculum. The objective of this study was to assess the extent of empathy in paramedic students across seven Australian universities. Methods: A cross-sectional study using a paper-based questionnaire with a convenience sample of first-, second- and third-year undergraduate paramedic students. Student empathy levels were measured using the Medical Condition Regard Scale (MCRS). Results: A total of 783 students participated in the study, of whom 57% were female. The medical conditions intellectual disability, attempted suicide and acute mental illness all produced mean scores above 50, suggesting good empathetic regard, while patients presenting with substance abuse produced the lowest mean score, M=41.57 (SD=12.29). There was a statistically significant difference between males (M=49.79) and females (M=51.61), p=0.006, for patients with intellectual disability. Conclusions: Students reported poor empathetic regard for patients with substance abuse, while female students reported higher levels of empathy than their male colleagues across each medical condition. The overall findings provide a framework for educators to begin constructing guidelines on the need to incorporate, promote and instil empathy in paramedic students in order to better prepare them for future out-of-hospital healthcare practice.


Resumo:

Most studies of in vitro fertilisation (IVF) outcomes use cycle-based data and fail to account for women who undergo repeated IVF cycles. The objective of this study was to examine the association between the number of eggs collected (EC), and the percentage fertilised normally, and women’s self-reported medical, personal and social histories. This study involved a cross-sectional survey of infertile women (aged 27-46 years) recruited from four privately-owned fertility clinics located in major cities of Australia. Regression modelling was used to estimate the mean EC and mean percentage of eggs fertilised normally, adjusted for age at EC. Appropriate statistical methods were used to take account of repeated IVF cycles by the same women. Among the 121 participants who returned the survey and completed 286 IVF cycles, the mean age at EC was 35.2 years (SD 4.5). Women’s age at EC was strongly associated with the number of EC: <30 years, 11.7 EC; 30.0-<35.0 years, 10.6 EC; 35.0-<40.0 years, 7.3 EC; 40.0+ years, 8.1 EC; p<.0001. Prolonged use of oral contraceptives was associated with lower numbers of EC: never used, 14.6 EC; 0-2 years, 11.7 EC; 3-5 years, 8.5 EC; 6+ years, 8.2 EC; p=.04. Polycystic ovary syndrome (PCOS) was associated with more EC: PCOS, 11.5 EC; no PCOS, 8.3 EC; p=.01. Occupational exposures may be detrimental to normal fertilisation: professional roles, 58.8%; trade and service roles, 51.8%; manual and other roles, 63.3%; p=.02. In conclusion, women’s age remains the characteristic most significantly associated with EC, but not with the percentage of eggs fertilised normally.
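Taking account of repeated cycles can be done in several ways; since the abstract only says "appropriate statistical methods" were used, the sketch below shows one simple illustrative approach, not necessarily the authors' method: collapsing cycles to one mean value per woman, so each woman contributes a single observation to later analysis.

```python
from collections import defaultdict

def mean_ec_per_woman(cycles):
    """cycles: iterable of (woman_id, eggs_collected) pairs.
    Returns {woman_id: mean eggs collected across her cycles},
    so repeated cycles by one woman do not inflate the sample size."""
    totals = defaultdict(lambda: [0, 0])
    for woman, ec in cycles:
        totals[woman][0] += ec   # running sum of eggs collected
        totals[woman][1] += 1    # running count of cycles
    return {w: s / n for w, (s, n) in totals.items()}

# Hypothetical data: woman "A" has two cycles, "B" has one
print(mean_ec_per_woman([("A", 10), ("A", 14), ("B", 8)]))  # {'A': 12.0, 'B': 8.0}
```

Mixed-effects models or generalised estimating equations are the more rigorous alternatives when cycle-level covariates matter.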


Resumo:

We compared the effects of an ice-slush beverage (ISB) and a cool liquid beverage (CLB) on cycling performance, changes in rectal temperature (Tre) and stress responses in hot, humid conditions. Ten trained male cyclists/triathletes completed two exercise trials (75 min cycling at ~60% peak power output + 50 min seated recovery + 30 min performance trial at 75% peak power output) on separate occasions in 34°C, 60% relative humidity. During the recovery phase before the performance trial, the athletes consumed either the ISB (mean ± SD -0.8 ± 0.1°C) or the CLB (18.4 ± 0.5°C). Performance time was not significantly different after consuming the ISB compared with the CLB (29.42 ± 2.07 min for ISB vs. 29.98 ± 3.07 min for CLB, P = 0.263). Tre (37.0 ± 0.3°C for ISB vs. 37.4 ± 0.2°C for CLB, P = 0.001) and physiological strain index (0.2 ± 0.6 for ISB vs. 1.1 ± 0.9 for CLB, P = 0.009) were lower at the end of recovery and before the performance trial after ingestion of the ISB compared with the CLB. Mean thermal sensation was lower (P < 0.001) during recovery with the ISB compared with the CLB. Changes in plasma volume and the concentrations of blood variables (i.e., glucose, lactate, electrolytes, cortisol and catecholamines) were similar between the two trials. In conclusion, ingestion of the ISB did not significantly alter exercise performance even though it significantly reduced pre-exercise Tre compared with the CLB. Irrespective of performance outcomes, ingestion of an ISB during recovery in hot, humid environments is a practical and effective method for cooling athletes after exercise.
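The physiological strain index combines the rise in core temperature and heart rate on a 0-10 scale; a minimal sketch assuming the widely used Moran et al. (1998) formulation (an assumption, since the abstract does not state which index was applied):

```python
def physiological_strain_index(tre_rest, tre_end, hr_rest, hr_end):
    """Physiological strain index (PSI) on a 0-10 scale, assuming the
    Moran et al. (1998) formulation: equal weighting of the rise in
    rectal temperature (ceiling 39.5 degC) and heart rate (ceiling
    180 beats/min) relative to resting values."""
    thermal = 5.0 * (tre_end - tre_rest) / (39.5 - tre_rest)
    cardiovascular = 5.0 * (hr_end - hr_rest) / (180.0 - hr_rest)
    return thermal + cardiovascular

# Example: rectal temperature 36.8 -> 37.4 degC, heart rate 60 -> 120
print(round(physiological_strain_index(36.8, 37.4, 60, 120), 2))  # 3.61
```

Values near zero, as reported after the ISB, indicate essentially no accumulated thermal or cardiovascular strain at that point.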


Resumo:

Aims: This study investigated the association between the basal (rest) insulin-signaling proteins Akt and the Akt substrate AS160, metabolic risk factors, inflammatory markers and aerobic fitness in middle-aged women with varying numbers of metabolic risk factors for type 2 diabetes. Methods: Sixteen women (n = 16) aged 51.3 ± 5.1 (mean ± SD) years provided muscle biopsies and blood samples at rest. In addition, anthropometric characteristics and aerobic power were assessed and the number of metabolic risk factors for each participant was determined (IDF criteria). Results: The mean number of metabolic risk factors was 1.6 ± 1.2. Total Akt was negatively correlated with IL-1β (r = -0.45, p = 0.046), IL-6 (r = -0.44, p = 0.052) and TNF-α (r = -0.51, p = 0.025). Phosphorylated AS160 was positively correlated with HDL (r = 0.58, p = 0.024) and aerobic fitness (r = 0.51, p = 0.047). Furthermore, a multiple regression analysis revealed that both HDL (t = 2.5, p = 0.032) and VO2peak (t = 2.4, p = 0.037) were better predictors of phosphorylated AS160 than TNF-α or IL-6 (p > 0.05). Conclusions: Elevated inflammatory markers and increased metabolic risk factors may inhibit insulin-signaling protein phosphorylation in middle-aged women, thereby increasing insulin resistance under basal conditions. Furthermore, higher HDL and fitness levels are associated with increased AS160 phosphorylation, which may in turn reduce insulin resistance.
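The r values above are Pearson product-moment correlations; a minimal self-contained implementation, shown with toy data rather than the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related toy data: expect r close to 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

With n = 16 as here, an |r| of roughly 0.5 is around the threshold of significance at p = 0.05, consistent with the borderline p values reported.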


Resumo:

PURPOSE: To examine the foveal retinal thickness (RT) and subfoveal choroidal thickness (ChT) between the fellow eyes of myopic anisometropes. METHODS: Twenty-two young (mean age 23 ± 5 years), healthy myopic anisometropes (≥ 1 D spherical equivalent [SEq] anisometropia) without amblyopia or strabismus were recruited. Spectral domain optical coherence tomography (SD-OCT) was used to capture images of the retina and choroid. Customised software was used to register, align and average multiple foveal OCT B-Scan images from each subject in order to enhance image quality. Two independent masked observers then manually determined the RT and ChT at the centre of the fovea from each SD-OCT image, which were then averaged. Axial length was measured using optical low coherence biometry during relaxed accommodation. RESULTS: The mean absolute SEq anisometropia was 1.74 ± 0.95 D and the mean interocular difference in axial length was 0.58 ± 0.41 mm. There was a strong correlation between SEq anisometropia and the interocular difference in axial length (r = 0.90, p < 0.001). Measures of RT and ChT were highly correlated between the two observers (r = 0.99 and 0.97 respectively) and in close agreement (mean inter-observer difference: RT 1.3 ± 2.2 µm, ChT 1.5 ± 13.7 µm). There was no significant difference in RT between the more (218 ± 18 µm) and less myopic eyes (215 ± 18 µm) (p > 0.05). However, the mean subfoveal ChT was significantly thinner in the more myopic eye (252 ± 46 µm) compared to the fellow, less myopic eye (286 ± 58 µm) (p < 0.001). There was a moderate correlation between the interocular difference in ChT and the interocular difference in axial length (r = -0.50, p < 0.01). CONCLUSIONS: Foveal RT was similar between the fellow eyes of myopic anisometropes; however, the subfoveal choroid was significantly thinner in the more myopic (longer) eye of our anisometropic cohort. The interocular difference in ChT correlated with the magnitude of axial anisometropia.