Abstract:
We investigated the influence of rectal temperature on the immune system during and after exercise. Ten well-trained male cyclists completed exercise trials (90 min cycling at 60% VO2max + 16.1-km time trial) on three separate occasions: once at 18°C and twice at 32°C. Twenty minutes after the trials at 32°C, the cyclists sat for approximately 20 min in cold water (14°C) on one occasion, whereas on another occasion they sat at room temperature. Rectal temperature increased significantly during cycling in both conditions, and was significantly higher after cycling at 32°C than at 18°C (P < 0.05). Leukocyte counts increased significantly during cycling but did not differ between the conditions. The concentrations of serum interleukin (IL)-6, IL-8 and IL-10, plasma catecholamines, granulocyte colony-stimulating factor, myeloperoxidase and calprotectin increased significantly following cycling in both conditions. The concentrations of serum IL-8 (25%), IL-10 (120%), IL-1 receptor antagonist (IL-1ra) (70%), tumour necrosis factor-alpha (TNF-alpha) (17%), plasma myeloperoxidase (26%) and norepinephrine (130%) were significantly higher after cycling at 32°C than at 18°C. During recovery from exercise at 32°C, rectal temperature was significantly lower in response to sitting in cold water than at room temperature. However, immune changes during 90 min of recovery did not differ significantly between sitting in cold water and at room temperature. The greater rise in rectal temperature during exercise at 32°C increased the concentrations of serum IL-8, IL-10, IL-1ra and TNF-alpha and plasma myeloperoxidase, whereas the greater decline in rectal temperature during cold-water immersion after exercise did not affect immune responses.
Abstract:
Creep and shrinkage behaviour of an ultra-lightweight cement composite (ULCC) up to 450 days was evaluated in comparison with that of a normal weight aggregate concrete (NWAC) and a lightweight aggregate concrete (LWAC) with similar 28-day compressive strength. The ULCC is characterized by low density (< 1500 kg/m³) and high compressive strength of about 60 MPa. Autogenous shrinkage increased rapidly in the ULCC at early age, and almost 95% of it occurred prior to the start of the creep test at 28 days. Hence, the majority of the shrinkage of the ULCC during the creep test was drying shrinkage. Total shrinkage of the ULCC during the 450-day creep test was the lowest compared with the NWAC and LWAC. However, the corresponding total creep of the ULCC was the highest, with a high proportion attributed to basic creep (≥ ~90%) and limited drying creep. The high creep of the ULCC is likely due to its low E-modulus. Specific creep of the ULCC was similar to that of the NWAC, but more than 80% higher than that of the LWAC. The creep coefficient of the ULCC was about 47% lower than that of the NWAC but about 18% higher than that of the LWAC. Among the five creep models evaluated, all of which tend to over-estimate the creep coefficient of the ULCC, the EC2 model gives acceptable predictions within +25% deviation.
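The two creep metrics compared above have standard definitions: specific creep is creep strain per unit sustained stress, and the creep coefficient is the ratio of creep strain to the instantaneous elastic strain on loading. A minimal sketch of both (all numerical values are invented for illustration and are not the study's data):

```python
# Illustration of the two creep metrics compared in the study: specific
# creep (creep strain per unit sustained stress) and creep coefficient
# (ratio of creep strain to initial elastic strain). Numbers are invented.

def specific_creep(creep_strain, applied_stress_mpa):
    """Creep strain (dimensionless) per MPa of sustained stress."""
    return creep_strain / applied_stress_mpa

def creep_coefficient(creep_strain, elastic_strain):
    """Ratio of time-dependent creep strain to instantaneous elastic strain."""
    return creep_strain / elastic_strain

# A low-E-modulus mix develops more elastic strain under the same stress,
# which by itself lowers the creep coefficient even when total creep is high.
stress = 20.0               # MPa sustained during the creep test (assumed)
elastic_strain = 1250e-6    # instantaneous strain on loading (assumed)
creep_strain = 1000e-6      # additional strain after 450 days (assumed)

print(specific_creep(creep_strain, stress))              # strain per MPa
print(creep_coefficient(creep_strain, elastic_strain))   # dimensionless
```

This illustrates why the ULCC can show the highest total creep yet a lower creep coefficient than the NWAC: its low E-modulus inflates the elastic strain in the denominator.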
Abstract:
STUDY DESIGN: Reliability and case-control injury study. OBJECTIVES: 1) To determine whether a novel device, designed to measure eccentric knee flexor strength via the Nordic hamstring exercise (NHE), displays acceptable test-retest reliability; 2) to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and 3) to determine whether the device could detect weakness in elite athletes with a previous history of unilateral HSI. BACKGROUND: HSIs and reinjuries are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSIs; however, there is a lack of easily accessible equipment to assess this strength quality. METHODS: Thirty recreationally active males without a history of HSI completed NHEs on the device on 2 separate occasions. Intraclass correlation coefficients (ICCs), typical error (TE), typical error as a coefficient of variation (%TE), and minimum detectable change at a 95% confidence interval (MDC95) were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a history of unilateral HSI within the previous 12 months performed NHEs on the device to determine whether residual eccentric muscle weakness existed in the previously injured limb. RESULTS: The device displayed moderate to high reliability (ICC = 0.83 to 0.90; TE = 21.7 N to 27.5 N; %TE = 5.8 to 8.5; MDC95 = 60.1 to 76.2 N). Mean ± SD normative eccentric knee flexor strength, based on the uninjured group, was 344.7 ± 61.1 N for the left side and 361.2 ± 65.1 N for the right side.
The previously injured limbs were 15% weaker than the contralateral uninjured limbs (mean difference = 50.3 N; 95% CI = 25.7 to 74.9 N; P < .01), 15% weaker than the normative left-limb data (mean difference = 50.0 N; 95% CI = 1.4 to 98.5 N; P = .04) and 18% weaker than the normative right-limb data (mean difference = 66.5 N; 95% CI = 18.0 to 115.1 N; P < .01). CONCLUSIONS: The experimental device offers a reliable method of determining eccentric knee flexor strength and strength asymmetry, and revealed residual weakness in previously injured elite athletes.
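The test-retest statistics reported above (TE, %TE, MDC95) can be computed from paired trial scores. A minimal sketch using the common conventions (TE as the SD of between-trial differences divided by √2; MDC95 = 1.96 × √2 × TE); the study's exact procedure is assumed, not stated, and the scores below are invented:

```python
import math
from statistics import mean, stdev

# Sketch of the test-retest reliability statistics named in the abstract.
# Formulas follow common sports-science conventions; example scores invented.

def typical_error(trial1, trial2):
    """TE = SD of the between-trial differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return stdev(diffs) / math.sqrt(2)

def mdc95(te):
    """Minimum detectable change at 95% confidence."""
    return 1.96 * math.sqrt(2) * te

def te_as_cv(te, trial1, trial2):
    """Typical error expressed as a percentage of the grand mean (%TE)."""
    return 100 * te / mean(trial1 + trial2)

# Invented NHE peak-force scores (N) for two sessions:
day1 = [320.0, 355.0, 410.0, 298.0, 376.0]
day2 = [331.0, 349.0, 402.0, 310.0, 381.0]
te = typical_error(day1, day2)
print(round(te, 1), round(mdc95(te), 1), round(te_as_cv(te, day1, day2), 1))
```

A between-session change smaller than the MDC95 cannot be distinguished from measurement noise, which is why the abstract reports it alongside TE.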
Abstract:
Human immunodeficiency virus (HIV), which leads to acquired immune deficiency syndrome (AIDS), reduces immune function, resulting in opportunistic infections and, later, death. Use of antiretroviral therapy (ART) increases the chances of survival; however, there are some concerns regarding fat redistribution (lipodystrophy), which may encompass subcutaneous fat loss (lipoatrophy) and/or fat accumulation (lipohypertrophy) in the same individual. This problem has been linked to antiretroviral drugs (ARVs), mainly those in the class of protease inhibitors (PIs), in addition to older age and being female. An additional concern is that the problem coexists with the metabolic syndrome, even though nutritional status/body composition, lipodystrophy and the metabolic syndrome remain poorly characterized in Uganda, where the use of ARVs is on the increase. In line with the literature, the overall aim of the study was to assess physical characteristics of HIV-infected patients using a comprehensive anthropometric protocol and to predict body composition based on these measurements and other standardised techniques. The other aim was to establish the existence of lipodystrophy, the metabolic syndrome, and associated risk factors. Thus, three studies were conducted on 211 (88 ART-naïve) HIV-infected, 15-49-year-old women, using a cross-sectional approach, together with a qualitative study of secondary information on patient HIV and medication status. In addition, face-to-face interviews were used to extract information concerning morphological experiences and lifestyle. The study revealed that participants were on average 34.1±7.65 years old, had lived 4.63±4.78 years with HIV infection and had spent 2.8±1.9 years receiving ARVs. Only 8.1% of participants were receiving PIs, and 26% of those receiving ART had ever changed drug regimen, 15.5% of whom changed drugs due to lipodystrophy.
Study 1 hypothesised that the mean nutritional status and predicted percent body fat values of study participants were within acceptable ranges; that they differed between participants receiving ARVs and HIV-infected ART-naïve participants; and that percent body fat estimated by anthropometric measures (BMI and skinfold thickness) and the BIA technique did not differ from that predicted by the deuterium oxide dilution technique. Using the body mass index (BMI), 7.1% of patients were underweight (<18.5 kg/m²) and 46.4% were overweight/obese (≥25.0 kg/m²). Based on waist circumference (WC), approximately 40% of the cohort was characterized as centrally obese. Moreover, the deuterium dilution technique showed that there was no between-group difference in total body water (TBW), fat mass (FM) or fat-free mass (FFM). However, the technique was the only approach to predict a between-group difference in percent body fat (p = .045), albeit with a very small effect size (0.021). Older age (β = 0.430, se = 0.089, p = .000), time spent receiving ARVs (β = 0.972, se = 0.089, p = .006), time with the infection (β = 0.551, se = 0.089, p = .000) and receiving ARVs (β = 2.940, se = 1.441, p = .043) were independently associated with percent body fat. Older age was the greatest single predictor of body fat. Furthermore, BMI gave better information than weight alone could, in that mean percentage body fat per unit BMI (N = 192) was significantly higher in patients receiving treatment (1.11±0.31) than in the exposed group (0.99±0.38, p = .025). For the assessment of obesity, percent fat measures did not greatly alter the accuracy of BMI as a measure for classifying individuals into the broad categories of underweight, normal and overweight.
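The BMI cut-offs quoted above (<18.5 kg/m² underweight, ≥25.0 kg/m² overweight/obese) follow the standard WHO classification. A minimal sketch of the classification logic; note the WC cut-off for central obesity is an assumption here (a commonly used IDF threshold for women), as the abstract does not state which cut-off was applied:

```python
# Sketch of the BMI and waist-circumference classifications used in Study 1.
# WHO BMI categories; the 80 cm WC cut-off is an assumed IDF threshold.

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(bmi_value):
    """Broad WHO categories as used in the abstract."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal"
    return "overweight/obese"

def centrally_obese(waist_cm, cutoff_cm=80.0):  # assumed cut-off for women
    return waist_cm >= cutoff_cm

print(bmi_category(bmi(48.0, 1.62)))   # BMI ~18.3 -> underweight
print(bmi_category(bmi(72.0, 1.60)))   # BMI ~28.1 -> overweight/obese
```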
Briefly, Study 1 revealed that there were more overweight/obese participants than in the general Ugandan population, that the problem was associated with ART status, and that BMI's broader classification categories were maintained when compared with the gold-standard technique. Study 2 hypothesized that the presence of lipodystrophy in participants receiving ARVs was no different from that in HIV-infected ART-naïve participants. Results showed that 112 (53.1%) patients had experienced at least one morphological alteration, including lipohypertrophy (7.6%), lipoatrophy (10.9%) and mixed alterations (34.6%). The majority of these subjects (90%) were receiving ARVs; in fact, all patients receiving PIs reported lipodystrophy. Period spent receiving ARVs (t209 = 6.739, p = .000), being on ART (χ² = 94.482, p = .000), receiving PIs (Fisher's exact χ² = 113.591, p = .000), recent T4 (CD4) count (t207 = 3.694, p = .000), time with HIV (t125 = 1.915, p = .045) and older age (t209 = 2.013, p = .045) were independently associated with lipodystrophy. Receiving ARVs was the greatest predictor of lipodystrophy (p = .000). In further analyses, aside from the subscapular skinfold (p = .004), there were no differences in the remaining skinfold sites or circumferences between participants with lipodystrophy and those without. Similarly, there was no difference in waist:hip ratio (WHR) (p = .186) or waist:height ratio (WHtR) (p = .257) between participants with lipodystrophy and those without. Further examination showed that none of the 4.1% of patients receiving stavudine (d4T) experienced lipoatrophy. However, 17.9% of patients receiving efavirenz (EFV), a non-nucleoside reverse transcriptase inhibitor (NNRTI), had lipoatrophy. Study 2 findings showed that the presence of lipodystrophy in participants receiving ARVs was in fact far higher than that in HIV-infected ART-naïve participants.
A final hypothesis was that the prevalence of the metabolic syndrome in participants receiving ARVs was no different from that in HIV-infected ART-naïve participants. The data showed that many patients (69.2%) lived with at least one feature of the metabolic syndrome based on the International Diabetes Federation (IDF, 2006) definition. However, there was no single anthropometric predictor of the components of the syndrome; the best anthropometric predictor varied with the component. The metabolic syndrome was diagnosed in 15.2% of the subjects, lower than commonly reported in this population, and was similar between the medicated and the exposed groups (χ²(1) = 0.018, p = .893). Moreover, the syndrome was associated with older age (p = .031) and percent body fat (p = .012). In addition, participants with the syndrome were heavier according to BMI (p = .000), larger at the waist (p = .000) and abdomen (p = .000), and were at central-obesity risk even when hip circumference (p = .000) and height (p = .000) were accounted for. In spite of those associations, the period with the disease (p = .13), CD4 counts (p = .836), and receiving ART (p = .442) or PIs (p = .678) were not associated with the metabolic syndrome. While the prevalence of the syndrome was highest amongst the older, larger and fatter participants, WC was the best predictor of the metabolic syndrome (p = .001). Another novel finding was that participants with the metabolic syndrome had greater arm muscle circumference (AMC) (p = .000) and arm muscle area (AMA) (p = .000), the former being most influential. Accordingly, should routine laboratory services not be feasible, WC was the easiest and cheapest indicator for assessing risk in this study sample. In addition, the final study illustrated that the prevalence of the metabolic syndrome in participants receiving ARVs was no different from that in HIV-infected ART-naïve participants.
Abstract:
Saudi Arabian education is undergoing substantial reform in the context of a nation transitioning from a resource-rich economy to a knowledge economy. Gifted students are important human resources for such developing countries. However, there are some concerns emanating from the international literature that gifted students have been neglected in many schools due to teachers' attitudes toward them. The literature shows that future teachers also hold similar negative attitudes, especially those in special education courses who, as practicing teachers, are often responsible for supporting the gifted education process. The purpose of this study was to explore whether these attitudes are held by future special education teachers in Saudi Arabia, and how the standard gifted education course, delivered as part of their program, impacts on their attitudes toward gifted students. The study was strongly influenced by the Theory of Reasoned Action (Ajzen, 1980, 2012) and the Theory of Personal Knowledge (Polanyi, 1966), both of which suggest that attitudes are related to people's (i.e. teachers') beliefs. A mixed-methods design was used to collect quantitative and qualitative data from a cohort of students enrolled in a teacher education program at a Saudi Arabian university. The program was designed for students majoring in special education. The quantitative component of the study involved an investigation of a cohort of future special education teachers taking a semester-long course in gifted education. The data were primarily sourced from a standard questionnaire instrument adapted into Arabic, supplemented with questions that probed the future teachers' attitudes toward gifted children. The participants, 90 future special education teachers, were enrolled in an introductory course about gifted education.
The questionnaire contained 34 items from the "Opinions about the Gifted and Their Education" (Gagné, 1991) questionnaire, utilising a five-point Likert scale. The quantitative data were analysed using descriptive statistics, Spearman correlation coefficients, paired-samples t-tests, and multiple linear regression. The qualitative component focussed on eight participants enrolled in the gifted education course. The qualitative data were primarily drawn from individual semi-structured interviews with each of these participants. The findings, based on both the quantitative and qualitative data, indicated that the majority of future special education teachers held, overall, slightly positive attitudes toward gifted students and their education. However, the participants were resistant to offering special services for the gifted within the regular classroom, even when a comparison was made on equity grounds with disabled students. While the participants held ambivalent attitudes toward ability grouping, their attitudes toward grade acceleration were positive. Further, the majority agreed that gifted students are likely to be rejected by their teachers. Despite such judgments, they considered the gifted to be a valuable resource for Saudi society. Differences within the cohort were found when certain variables emerged as potential predictors of attitude: age and experience, and participants' hometown. The younger (under 25 years old) future special education teachers, with no internship or school practice experience, held more positive attitudes toward gifted students, with respect to their general needs, than did the older participants with previous school experience. Additionally, participants from a rural region were more resistant toward gifted education than future teachers from urban areas.
The findings also indicated that, as a result of the course, the attitudes of most of the participants toward ability grouping, such as special classes and schools, improved significantly, but participants remained highly concerned that differentiation within regular classrooms would bring either elitism or time pressure. From the findings, it can be confirmed that a lecture-based course can serve as a starting point from which to focus future teachers' attention on the varied needs of the gifted, and as a conduit for learning about special services for the gifted. However, by itself, the course appears to have minimal influence on attitudes toward differentiation. As a consequence, there is merit in its redevelopment, and in the incorporation of more practical opportunities for future teachers to experience the teaching of the gifted.
Abstract:
After attending this presentation, attendees will gain awareness of: (1) the error and uncertainty associated with the application of the Suchey-Brooks (S-B) method of age estimation of the pubic symphysis to a contemporary Australian population; (2) the implications of sexual dimorphism and bilateral asymmetry of the pubic symphysis through preliminary geometric morphometric assessment; and (3) the value of three-dimensional (3D) autopsy data acquisition for creating forensic anthropological standards. This presentation will impact the forensic science community by demonstrating that, in the absence of demographically sound skeletal collections, post-mortem autopsy data provide an exciting platform for the construction of large contemporary 'virtual osteological libraries' from which forensic anthropological research can be conducted on Australian individuals. More specifically, this study assesses the applicability and accuracy of the S-B method in a contemporary adult population in Queensland, Australia, and, using a geometric morphometric approach, provides an insight into the age-related degeneration of the pubic symphysis. Despite the prominent use of the Suchey-Brooks (1990) method of age estimation in forensic anthropological practice, it is subject to intrinsic limitations, with reports of differential inter-population error rates between geographical locations [1-4]. Australian forensic anthropology is constrained by a paucity of population-specific standards due to a lack of repositories of documented skeletons. Consequently, in Australian casework proceedings, standards constructed from predominantly American reference samples are applied to establish a biological profile. In the global era of terrorism and natural disasters, more specific population standards are required to improve the efficiency of medico-legal death investigation in Queensland.
The sample comprises multi-slice computed tomography (MSCT) scans of the pubic symphysis (slice thickness: 0.5 mm; overlap: 0.1 mm) of 195 individuals of Caucasian ethnicity aged 15-70 years. Volume-rendering reconstruction of the symphyseal surface was conducted in Amira® (v.4.1) and quantitative analyses in Rapidform® XOS. The sample was divided into ten-year age sub-sets (e.g., 15-24), with a final sub-set of 65-70 years. Error with respect to the method's assigned means was analysed on the basis of bias (directionality of error), inaccuracy (magnitude of error) and percentage of correct classification of left and right symphyseal surfaces. Morphometric variables including surface area, circumference, and maximum height and width of the symphyseal surface, together with micro-architectural assessment of cortical and trabecular bone composition, were quantified using novel automated engineering software capabilities. The results of this study demonstrated correct age classification, utilizing the mean and standard deviations of each phase of the S-B method, of 80.02% and 86.18% in Australian males and females, respectively. Application of the S-B method resulted in positive biases and mean inaccuracies of 7.24 (±6.56) years for individuals less than 55 years of age, compared with negative biases and mean inaccuracies of 5.89 (±3.90) years for individuals greater than 55 years of age. Statistically significant differences between chronological and S-B mean age were demonstrated in 83.33% and 50% of the six age sub-sets in males and females, respectively. Asymmetry of the pubic symphysis was a frequent phenomenon, with 53.33% of the Queensland population exhibiting statistically significant (χ², p<0.01) differential phase classification of the left and right surfaces of the same individual. Directionality was found in the bilateral asymmetry, with the right symphyseal faces appearing slightly older on average and providing more accurate estimates using the S-B method [5].
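The error metrics above have conventional definitions in the age-estimation literature: bias is the signed mean of (estimated minus chronological) age, showing the direction of over- or under-aging, while inaccuracy is the mean absolute error, showing its magnitude. A minimal sketch (ages below are invented for illustration, not the study's data):

```python
from statistics import mean

# Sketch of bias (signed mean error) and inaccuracy (mean absolute error)
# as used to evaluate the S-B method. All ages are invented.

def bias(estimated, chronological):
    """Signed mean error: positive = over-aging, negative = under-aging."""
    return mean(e - c for e, c in zip(estimated, chronological))

def inaccuracy(estimated, chronological):
    """Mean absolute error, regardless of direction."""
    return mean(abs(e - c) for e, c in zip(estimated, chronological))

chron = [22, 31, 47, 58, 66]   # known chronological ages
est   = [29, 38, 52, 51, 60]   # phase mean ages assigned by the method

print(bias(est, chron), inaccuracy(est, chron))
```

Note how a small bias can coexist with a large inaccuracy: over- and under-estimates cancel in the signed mean but not in the absolute mean, which is why the study reports both.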
Morphometric analysis verified these findings, with the left surface exhibiting significantly greater circumference and surface area than the right (p<0.05). Morphometric analysis also demonstrated an increase in the maximum height and width of the surface with age, with the most significant changes (p<0.05) occurring between the 25-34 and 55-64 year age sub-sets. These differences may be attributed to hormonal components linked to menopause in females and a reduction in testosterone in males. Micro-architectural analysis demonstrated degradation of cortical composition with age, with differential bone resorption between the medial, ventral and dorsal surfaces of the pubic symphysis. This study recommends that the S-B method be applied with caution in medico-legal death investigations of unknown skeletal remains in Queensland. Age estimation will always be accompanied by error; this study therefore demonstrates the potential of quantitative morphometric modelling of age-related changes of the pubic symphysis as a tool for methodological refinement, providing a rigorous and robust assessment to remove the subjectivity associated with current pelvic aging methods.
Abstract:
The increasing prevalence of obesity in society has been associated with a number of atherogenic risk factors, such as insulin resistance. Aerobic training is often recommended as a strategy to induce weight loss, with a greater impact of high-intensity levels on cardiovascular function and insulin sensitivity, and a greater impact of moderate-intensity levels on fat oxidation. Anaerobic high-intensity (supramaximal) interval training has been advocated to improve cardiovascular function, insulin sensitivity and fat oxidation. However, obese individuals tend to have a lower tolerance of high-intensity exercise due to discomfort. Furthermore, some obese individuals may compensate for the increased energy expenditure by eating more and/or becoming less active. Recently, both moderate- and high-intensity aerobic interval training have been advocated as alternative approaches. However, it is still uncertain which approach is more effective in terms of increasing fat oxidation, given the issues with levels of fitness and motivation, and compensatory behaviours. Accordingly, the objectives of this thesis were to compare the influence of moderate- and high-intensity interval training on fat oxidation and eating behaviour in overweight/obese men. Two exercise interventions were undertaken by 10-12 overweight/obese men to compare their responses to the study variables, including fat oxidation and eating behaviour, during moderate- and high-intensity interval training (MIIT and HIIT). The acute training intervention was a methodological study designed to examine the validity of using exercise intensity from the graded exercise test (GXT), which measured the intensity eliciting maximal fat oxidation (FATmax), to prescribe interval training during 30-min MIIT. The 30-min MIIT session involved 5-min repetitions at workloads 20% below and 20% above FATmax.
The acute intervention was extended to involve HIIT in a cross-over design to compare the influence of MIIT and HIIT on eating behaviour, using subjective appetite sensations and food preference via the liking and wanting test. The HIIT consisted of 15-sec intervals at 85% VO2peak interspersed with 15-sec unloaded recovery, with total mechanical work equal to that of MIIT. The medium-term training intervention was a cross-over 4-week (12-session) MIIT and HIIT exercise training program with a 6-week detraining washout period. The MIIT sessions consisted of 5-min cycling stages at ±20% of the mechanical work at 45% VO2peak, and the HIIT sessions consisted of repeated 30-sec work bouts at 90% VO2peak with 30-sec rest intervals, during identical exercise sessions of between 30 and 45 min. Assessments included a constant-load test (45% VO2peak for 45 min) followed by 60-min recovery, at baseline and at the end of the 4-week training, to determine fat oxidation rate. Participants' responses to exercise were measured using blood lactate (BLa), heart rate (HR) and rating of perceived exertion (RPE), recorded during the constant-load test and in the first training session of every week during training. Eating behaviour responses were assessed by measuring subjective appetite sensations, liking and wanting, and ad libitum energy intake. Results of the acute intervention showed that FATmax is a valid method to estimate VO2 and BLa, but not HR and RPE, in the MIIT session. While the average rate of fat oxidation during 30-min MIIT was comparable with the rate of fat oxidation at FATmax (0.16 ±0.09 and 0.14 ±0.08 g/min, respectively), fat oxidation was significantly higher at minute 25 of MIIT (P≤0.01). In addition, there was no significant difference between MIIT and HIIT in appetite sensations after exercise, although there was a tendency towards lower hunger after HIIT.
Different intensities of interval exercise also did not affect explicit liking or implicit wanting. Results of the medium-term intervention indicated that the interval training did not affect body composition, fasting insulin or fasting glucose. Maximal aerobic capacity increased significantly (P≤0.01) during the GXT (by 2.8% and 7.0% after MIIT and HIIT, respectively), and fat oxidation increased significantly (P≤0.01) during the acute constant-load exercise test (by 96% and 43% after MIIT and HIIT, respectively). RPE decreased significantly more after HIIT than after MIIT (P≤0.05), and the decrease in BLa during the constant-load test was greater after HIIT than after MIIT, although this difference did not reach statistical significance (P=0.09). In addition, following constant-load exercise, exercise-induced hunger and desire to eat decreased more after HIIT than after MIIT, but the differences were not significant (the p value for desire to eat was 0.07). Exercise-induced liking of high-fat sweet (HFSW) and high-fat non-sweet (HFNS) foods increased after MIIT and decreased after HIIT (the p value for HFNS was 0.09). The intervention explained 12.4% of the change in fat intake (p = 0.07). This research is significant in that it confirmed two points in the acute study. While the rate of fat oxidation increased during MIIT, the average rate of fat oxidation during 30-min MIIT was comparable with the rate at FATmax. In addition, manipulating the intensity of acute interval exercise did not affect appetite sensations or liking and wanting. In the medium-term intervention, constant-load exercise-induced fat oxidation increased significantly after interval training, independent of exercise intensity. In addition, desire to eat, explicit liking for HFNS foods and fat intake collectively indicated that MIIT is accompanied by greater compensatory eating behaviour than HIIT. Findings from this research will assist in developing exercise strategies to provide obese men with various training options.
In addition, the finding that overweight/obese men reported lower RPE and decreased BLa after HIIT compared with MIIT is contrary to the view that obese individuals may not tolerate high-intensity interval training. Therefore, high-intensity interval training can be advocated for the obese adult male population. Future studies may extend this work by using a longer-term intervention.
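The fat oxidation rates above (g/min) are typically derived from expired-gas measurements via indirect calorimetry. A minimal sketch using Frayn's (1983) stoichiometric equations, which assume negligible protein oxidation; the thesis's exact computation is assumed, not stated:

```python
# Sketch of substrate oxidation rates from indirect calorimetry using
# Frayn's stoichiometric equations (protein oxidation neglected).
# VO2 and VCO2 are in L/min; results in g/min. Example values invented.

def fat_oxidation(vo2_l_min, vco2_l_min):
    """Fat oxidation (g/min) = 1.67*VO2 - 1.67*VCO2."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min

def cho_oxidation(vo2_l_min, vco2_l_min):
    """Carbohydrate oxidation (g/min) = 4.55*VCO2 - 3.21*VO2."""
    return 4.55 * vco2_l_min - 3.21 * vo2_l_min

# Example: moderate cycling with a respiratory exchange ratio of ~0.88
vo2, vco2 = 2.0, 1.76
print(round(fat_oxidation(vo2, vco2), 2))   # g/min of fat
print(round(cho_oxidation(vo2, vco2), 2))   # g/min of carbohydrate
```

As the RER (VCO2/VO2) approaches 1.0 the fat term goes to zero, which is why fat oxidation is maximal at submaximal intensities such as FATmax.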
Abstract:
Dr Wyatt's study investigated the complex relationship between vitamin D and melanoma, specifically whether vitamin D status is associated with more aggressive melanomas. Exposure to solar ultraviolet radiation is the principal risk factor for melanoma and also the main source of vitamin D. This research found that insufficient vitamin D at the time of melanoma diagnosis is significantly associated with poorer prognosis (as defined by tumour thickness). These results will contribute to a more refined public health message concerning melanoma and vitamin D, particularly in Queensland, which has the highest incidence of melanoma globally yet where vitamin D deficiency is not uncommon.
Abstract:
Many primary immunodeficiency disorders of differing etiologies have been well characterized, and much understanding of immunological processes has been gained by investigating the mechanisms of disease. Here, we have used a whole-genome approach, employing single-nucleotide polymorphism and gene expression microarrays, to provide insight into the molecular etiology of a novel immunodeficiency disorder. Using DNA copy number profiling, we define a hyperploid region on 14q11.2 in the immunodeficiency case associated with the interleukin (IL)-25 locus. This alteration was associated with significantly heightened expression of IL25 following T-cell activation. An associated dominant T helper type 2 (Th2) bias in the immunodeficiency case provides a mechanistic explanation for the recurrence of infections by pathogens normally countered by Th1-driven responses. Furthermore, this highlights the capacity of IL25 to alter normal human immune responses.
Abstract:
Alcohol-involved accidents are one of the leading contributors to high injury rates among Indigenous Australians. However, there is limited information available to inform existing policies to change current rates. The study aims to provide information about the prevalence and characteristics of such behaviour. Drink driving convictions from 2006-2010 were extracted from the Queensland Department of Justice and Attorney-General database. Convictions were grouped by gender, age, Accessibility/Remoteness Index of Australia classification (using court location) and sentence severity. A number of cross-tabulations were carried out to identify relationships between variables. Standardised adjusted residuals were calculated for each cell in order to determine which cell differences contributed to the chi-square test results. Analysis revealed there were 9,323 convictions, of which the majority were for offences by males (77.5%). In relation to age, 52.6% of the convictions were of persons under 25 years of age. Age differed significantly across the five regions for males only (χ²=90.8, p<0.001), with a larger number of convictions in the 'very remote' region among persons aged 40+ years. Increased remoteness was linked with high-range BAC convictions for both males (χ²=168.4, p<0.001) and females (χ²=22.5, p=0.004). Monetary penalties were the primary sentence received by both males and females in all regions. The findings identify the Indigenous drink driving conviction rate to be six times that of the general Queensland rate and indicate that a multipronged approach is needed, with tailored strategies for remote offenders, young adults, and offenders with alcohol misuse and dependency issues. Further attention is warranted in this area of road safety.
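The standardised adjusted residuals mentioned above flag which cells of a contingency table drive a significant chi-square result. A minimal sketch of the standard formula; the counts below are invented and do not reflect the study's data:

```python
import math

# Sketch of standardised adjusted residuals for a contingency table:
#   r_ij = (O_ij - E_ij) / sqrt(E_ij * (1 - row_i/n) * (1 - col_j/n))
# Values beyond about +/-2 indicate cells departing from independence.
# Example counts are invented.

def adjusted_residuals(table):
    """table: list of rows of observed counts; returns same-shape residuals."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    out = []
    for i, row in enumerate(table):
        out.append([])
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n           # expected count
            denom = math.sqrt(exp * (1 - row_tot[i] / n) * (1 - col_tot[j] / n))
            out[i].append((obs - exp) / denom)
    return out

# Hypothetical 2x2 table: region (rows) by BAC range (columns)
res = adjusted_residuals([[120, 30], [60, 90]])
print([[round(v, 2) for v in row] for row in res])
```

Each residual is approximately standard normal under independence, so individual cells can be screened against z cut-offs after an omnibus chi-square test.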
Resumo:
It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory, and on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished and evaluate the effectiveness of an individualised nutrition intervention on nutritional status. Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery. The pre-operative rate of malnutrition in a convenience sample of PWP scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0 – 78.0) years, median PD duration 6.75 (0.5 – 24.0) years) participated and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, body mass index (BMI)) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B), while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05). 
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than that in other studies reporting malnutrition in PWP, and it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented. Phase I: Nutritional status in community-dwelling adults with Parkinson's disease. The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred twenty-five PWP (74 male, median age 70.0 (35.0 – 92.0) years, median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, body mass index (BMI)) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score was also significantly higher in the malnourished (2 vs 8, p<.05). The nutrition impact symptoms which differentiated the well-nourished from the malnourished were lack of appetite, constipation, diarrhoea, problems swallowing and feeling full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk. 
Phase I: Nutrition assessment tools in people with Parkinson's disease. There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools has been evaluated in PWP. In the sample described above, the use of the World Health Organisation (WHO) cut-off (≤18.5kg/m2), age-specific BMI cut-offs (≤18.5kg/m2 for under 65 years, ≤23.5kg/m2 for 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools using the SGA classification as the gold standard. For screening, the MNA-SF performed best, with sensitivity (Sn) of 94.7% and specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and takes less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions. Phase I: Predictors of poor nutritional status in people with Parkinson's disease. A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in all of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B). 
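The screening-tool comparison above rests on sensitivity and specificity judged against the SGA "gold standard". A minimal sketch of that computation; the function name and the example data are ours, not the thesis's.

```python
def sensitivity_specificity(tool_positive, gold_positive):
    """Each argument is a list of booleans, one entry per participant:
    True = classified at-risk/malnourished by the tool or gold standard."""
    pairs = list(zip(tool_positive, gold_positive))
    tp = sum(1 for t, g in pairs if t and g)          # true positives
    fn = sum(1 for t, g in pairs if not t and g)      # false negatives
    tn = sum(1 for t, g in pairs if not t and not g)  # true negatives
    fp = sum(1 for t, g in pairs if t and not g)      # false positives
    return tp / (tp + fn), tn / (tn + fp)             # (sensitivity, specificity)
```

For a screening tool, high sensitivity is prioritised so that at-risk individuals are not missed; this is why the MNA-SF (Sn 94.7%) is favoured for screening above.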
Differences between the groups included disease severity (more severe disease in 4% of SGA-A vs 21% of SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by lack of social support. Phase II: Nutrition intervention. Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. INT gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3 kJ/kg vs 3.8 kJ/kg and 0.3 g/kg vs 0.15 g/kg) increased in both groups. The increase in protein intake was only significant in the SC group. The changes in intake did not differ between the groups. 
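The odds ratios and confidence intervals in the predictor analysis above are conventionally obtained by exponentiating logistic-regression coefficients. A hedged sketch of that conversion; the beta and standard-error values used in the example are illustrative, not the thesis's fitted values.

```python
import math

def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into (odds ratio, lower 95% CI, upper 95% CI), rounded to 2 dp."""
    return tuple(round(math.exp(v), 2) for v in (beta, beta - z * se, beta + z * se))
```

For example, a coefficient of 0.0862 with standard error 0.04 corresponds to an OR of 1.09 with 95% CI 1.01 – 1.18, the same shape of result as reported for age at diagnosis above.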
There were no significant changes in any motor or non-motor symptoms or in "off" times or dyskinesias in either group. Aspects of quality of life improved over the 12 weeks as well, especially emotional well-being. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease as well as for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high protein, high energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
Resumo:
Traditionally, infectious diseases and under-nutrition have been considered major health problems in Sri Lanka, with little attention paid to obesity and associated non-communicable diseases (NCDs). However, the recent Sri Lanka Diabetes and Cardiovascular Study (SLDCS) reported epidemic levels of obesity, diabetes and metabolic syndrome. Moreover, obesity-associated NCDs are the leading cause of death in Sri Lanka, and there is an exponential increase in hospitalization due to NCDs, adversely affecting the development of the country. Despite Sri Lanka having a very high prevalence of NCDs and associated mortality, little is known about the causative factors for this burden. It is widely believed that the global NCD epidemic is associated with recent lifestyle changes, especially dietary factors. In the absence of sufficient data on dietary habits in Sri Lanka, successful interventions to manage these serious health issues would not be possible. In view of the current situation, this dietary survey was undertaken to assess the intakes of energy, macro-nutrients and selected other nutrients with respect to socio-demographic characteristics and the nutritional status of Sri Lankan adults, especially focusing on obesity. Another aim of this study was to develop and validate a culturally specific food frequency questionnaire (FFQ) to assess dietary risk factors of NCDs in Sri Lankan adults. Data were collected from a subset of the national SLDCS using a multi-stage, stratified, random sampling procedure (n=500). However, data collection in the SLDCS was affected by the prevailing civil war, which resulted in no data being collected from the Northern and Eastern provinces. To obtain a nationally representative sample, additional subjects (n=100) were later recruited from the two provinces using similar selection criteria. 
Ethical approval for this study was obtained from the Ethical Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka, and informed consent was obtained from the subjects before data were collected. Dietary data were obtained using the 24-h Dietary Recall (24HDR) method. Subjects were asked to recall all foods and beverages consumed over the previous 24-hour period. Respondents were probed for the types of foods and food preparation methods. For the FFQ validation study, a 7-day weighed diet record (7-d WDR) was used as the reference method. All foods recorded in the 24HDR were converted into grams, and intakes of energy and nutrients were then analysed using NutriSurvey 2007 (EBISpro, Germany), which was modified for Sri Lankan food recipes. Socio-demographic details and body weight perception were collected via an interviewer-administered questionnaire. BMI was calculated, and overweight (BMI ≥23 kg.m-2), obesity (BMI ≥25 kg.m-2) and abdominal obesity (men: WC ≥90 cm; women: WC ≥80 cm) were categorized according to Asia-Pacific anthropometric cut-offs. SPSS v16 for Windows and Minitab v10 were used for statistical analysis. From a total of 600 eligible subjects, 491 (81.8%) participated, of whom 34.5% (n=169) were males. Subjects were well distributed among different socio-economic parameters. A total of 312 different food items were recorded, and nutritionists grouped similar food items, resulting in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Seventy-seven subjects (response rate 65%) completed the FFQ and 7-day WDR. 
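The anthropometric categorisation above applies the Asia-Pacific cut-offs stated in the text (overweight BMI ≥23, obesity BMI ≥25, abdominal obesity WC ≥90 cm for men / ≥80 cm for women). A small helper illustrating those thresholds; the function itself is our sketch, not the study's code.

```python
def classify(weight_kg, height_m, waist_cm, male):
    """Classify one adult using the Asia-Pacific cut-offs described above."""
    bmi = weight_kg / height_m ** 2
    return {
        "bmi": bmi,
        "overweight": bmi >= 23.0,
        "obese": bmi >= 25.0,
        "abdominal_obesity": waist_cm >= (90.0 if male else 80.0),
    }
```

Note that these cut-offs are lower than the WHO general thresholds (25/30 kg.m-2), which is why overweight and obesity prevalences under the Asia-Pacific criteria run higher.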
Estimated mean energy intake (SD) from the FFQ (1794±398 kcal) and the 7-d WDR (1698±333 kcal, P<0.001) differed significantly, due to a significant overestimation of carbohydrate (~10 g/d, P<0.001) and, to some extent, fat (~5 g/d, NS). Significant positive correlations were found between the FFQ and the 7-d WDR for energy (r = 0.39), carbohydrate (r = 0.47), protein (r = 0.26), fat (r = 0.17) and dietary fibre (r = 0.32). Bland-Altman graphs indicated fairly good agreement between the methods, with no relationship between bias and average intake for any nutrient examined. The findings from the nutrition survey showed that, on average, Sri Lankan adults consumed over 14 portions of starch/d; moreover, males consumed 5 more portions of cereal than females. Sri Lankan adults consumed on average 3.56 portions of added sugars/d. Moreover, mean daily intakes of fruit (0.43) and vegetable (1.73) portions were well below the minimum dietary recommendations (fruits 2 portions/d; vegetables 3 portions/d). The total fruit and vegetable intake was 2.16 portions/d. Daily consumption of meat or alternatives was 1.75 portions, and the sum of meat and pulses was 2.78 portions/d. Starchy foods were consumed by all participants, and over 88% met the minimum daily recommendations. Importantly, nearly 70% of adults exceeded the maximum daily recommendation for starch (11 portions/d), and a considerable proportion consumed larger numbers of starch servings daily, particularly men. More than 12% of men consumed over 25 starch servings/d. In contrast to their starch consumption, participants reported very low intakes of other food groups. Only 11.6%, 2.1% and 3.5% of adults consumed the minimum daily recommended servings of vegetables, fruits, and fruits and vegetables combined, respectively. Six out of ten adult Sri Lankans sampled did not consume any fruit. 
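The Bland-Altman analysis mentioned above assesses agreement between the two dietary methods via the mean difference (bias) and 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch; the sample intakes in the test of use are made up.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return (bias, lower limit of agreement, upper limit of agreement)
    for paired measurements from two methods (e.g. FFQ vs 7-d WDR)."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Plotting the differences against the pairwise means then shows whether bias depends on intake level, which is the relationship the text reports as absent.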
Milk and dairy consumption was extremely low; over a third of the population did not consume any dairy products, and less than 1% of adults consumed 2 portions of dairy/d. A quarter of Sri Lankans did not report consumption of meat and pulses. Regarding protein consumption, 36.2% attained the minimum Sri Lankan recommendation for protein, and significantly more men than women achieved the recommendation of ≥3 servings of meat or alternatives daily (men 42.6%, women 32.8%; P<0.05). Over 70% of energy was derived from carbohydrates (male: 72.8±6.4%, female: 73.9±6.7%), followed by fat (male: 19.9±6.1%, female: 18.5±5.7%) and protein (male: 10.6±2.1%, female: 10.9±5.6%). The average intake of dietary fibre was 21.3 g/day for males and 16.3 g/day for females. There were significant differences in nutritional intake related to ethnicity, area of residence, education level and BMI category. Similarly, dietary diversity was significantly associated with several socio-economic parameters among Sri Lankan adults. Adults with BMI ≥25 kg.m-2 and abdominally obese Sri Lankan adults had the highest diet diversity values. Age-adjusted prevalences (95% confidence interval) of overweight, obesity and abdominal obesity among Sri Lankan adults were 17.1% (13.8-20.7), 28.8% (24.8-33.1) and 30.8% (26.8-35.2), respectively. Men, compared with women, were less overweight, 14.2% (9.4-20.5) versus 18.5% (14.4-23.3), P = 0.03; less obese, 21.0% (14.9-27.7) versus 32.7% (27.6-38.2), P < .05; and less abdominally obese, 11.9% (7.4-17.8) versus 40.6% (35.1-46.2), P < .05. Although the prevalence of obesity has reached epidemic levels, body weight misperception was common among Sri Lankan adults. Two-thirds of overweight males and 44.7% of females considered themselves to be "about right weight". Over one third of both male and female obese subjects perceived themselves as "about right weight" or "underweight". 
Nearly 32% of centrally obese men and women perceived that their waist circumference was about right. Of those who perceived themselves as overweight or very overweight (n = 154), only 63.6% tried to lose weight (n = 98), and a quarter sought advice from professionals (n = 39). A number of important conclusions can be drawn from this research project. Firstly, the newly developed FFQ is an acceptable tool for assessing the nutrient intake of Sri Lankans and will assist proper categorization of individuals by dietary exposure. Secondly, a substantial proportion of the Sri Lankan population does not consume a varied and balanced diet, which is suggestive of a close association between the nutrition-related NCDs in the country and unhealthy eating habits. Moreover, dietary diversity is positively associated with several socio-demographic characteristics and obesity among Sri Lankan adults. Lastly, although obesity is a major health issue among Sri Lankan adults, body weight misperception was common among underweight, healthy weight, overweight and obese adults in Sri Lanka. Over two-thirds of overweight and one-third of obese Sri Lankan adults believed that they were in the "right weight" or "under-weight" categories.
Resumo:
Purpose: The objective of the study was to assess the bioequivalence of two tablet formulations of capecitabine and to explore the effect of age, gender, body surface area and creatinine clearance on the systemic exposure to capecitabine and its metabolites. Methods: The study was designed as an open, randomized two-way crossover trial. A single oral dose of 2000 mg capecitabine was administered on two separate days to 25 patients with solid tumors. On one day, the patients received four 500-mg tablets of formulation B (test formulation) and on the other day, four 500-mg tablets of formulation A (reference formulation). The washout period between the two administrations was between 2 and 8 days. After each administration, serial blood and urine samples were collected for up to 12 and 24 h, respectively. Unchanged capecitabine and its metabolites were determined in plasma using LC/MS-MS and in urine by NMRS. Results: Based on the primary pharmacokinetic parameter, AUC(0-∞) of 5'-DFUR, equivalence was concluded for the two formulations, since the 90% confidence interval for the estimate of formulation B relative to formulation A (97% to 107%) was within the acceptance region of 80% to 125%. There was no clinically significant difference between the t(max) values for the two formulations (median 2.1 versus 2.0 h). The estimate for C(max) of formulation B relative to formulation A was 111%, and the 90% confidence interval (95% to 136%) was within the reference region of 70% to 143%. Overall, these results suggest no relevant difference between the two formulations in the extent to which 5'-DFUR reached the systemic circulation or the rate at which 5'-DFUR appeared in the systemic circulation. The overall urinary excretions were 86.0% and 86.5% of the dose, respectively, and the proportion recovered as each metabolite was similar for the two formulations. The majority of the dose was excreted as FBAL (61.5% and 60.3%), with all other chemical species making a minor contribution. 
Univariate and multivariate regression analysis to explore the influence of age, gender, body surface area and creatinine clearance on the log-transformed pharmacokinetic parameters AUC(0-∞) and C(max) of capecitabine and its metabolites revealed no clinically significant effects. The only statistically significant results were obtained for AUC(0-∞) and C(max) of intact drug and for C(max) of FBAL, which were higher in females than in males. Conclusion: The bioavailability of 5'-DFUR in the systemic circulation was practically identical after administration of the two tablet formulations. Therefore, the two formulations can be regarded as bioequivalent. The variables investigated (age, gender, body surface area, and creatinine clearance) had no clinically significant effect on the pharmacokinetics of capecitabine or its metabolites.
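The bioequivalence decision above hinges on whether the 90% confidence interval of the test/reference ratio of log-transformed exposure lies within 80% to 125%. A simplified paired-difference sketch of that decision rule; a full crossover analysis would use ANOVA with period and sequence effects, and the function, its arguments and the example values are our illustration only.

```python
import math
import statistics

def be_ratio_ci(test_auc, ref_auc, t_crit):
    """90% CI of the geometric-mean test/reference ratio, in percent.
    t_crit is the 90% two-sided t quantile for n-1 degrees of freedom,
    supplied by the caller (paired-difference simplification)."""
    logs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    m = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    lo, hi = math.exp(m - t_crit * se), math.exp(m + t_crit * se)
    return 100 * math.exp(m), 100 * lo, 100 * hi

def bioequivalent(lo_pct, hi_pct):
    """Standard AUC acceptance region used in the study above."""
    return 80.0 <= lo_pct and hi_pct <= 125.0
```

With the study's reported interval of 97% to 107% for AUC(0-∞) of 5'-DFUR, this rule returns the same conclusion as the text: equivalent.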
Resumo:
Purpose: To design and manufacture lenses to correct peripheral refraction along the horizontal meridian and to determine whether these resulted in noticeable improvements in visual performance. Method: Subjective refraction of a low myope was determined on the basis of best peripheral detection acuity along the horizontal visual field out to ±30° for both horizontal and vertical gratings. Subjective refraction was compared to objective refractions using a COAS-HD aberrometer. Special lenses were made to correct peripheral refraction, based on designs optimized with and without smoothing across a 3 mm diameter square aperture. Grating detection was retested with these lenses. Contrast thresholds for 1.25’ spots were determined across the field for the conditions of best correction, on-axis correction, and the special lenses. Results: The participant had high relative peripheral hyperopia, particularly in the temporal visual field (maximum 2.9 D). There were differences > 0.5 D between subjective and objective refractions at a few field angles. On-axis correction reduced peripheral detection acuity and increased peripheral contrast threshold in the peripheral visual field, relative to the best correction, by up to 0.4 and 0.5 log units, respectively. The special lenses restored most of the peripheral vision, although not fully at angles within ±10°, and the lens optimized with aperture-smoothing possibly gave better vision than the lens optimized without aperture-smoothing at some angles. Conclusion: It is possible to design and manufacture lenses that give near-optimum peripheral visual performance to at least ±30° along one visual field meridian. The benefit of such lenses is likely to be manifest only if a subject has considerable relative peripheral refraction, for example of the order of 2 D.
Resumo:
Purpose: Inaccurate accommodation during nearwork and subsequent accommodative hysteresis may influence myopia development. Myopia is highly prevalent in Singapore; an untested theory is that Chinese children are prone to these accommodation characteristics. We measured the accuracy of accommodation responses during, and nearwork-induced transient myopia (NITM) after, periods spent reading Chinese and English texts. Methods: Refractions of 40 emmetropic and 43 myopic children were measured with a free-space autorefractor for four 10-minute reading tasks: Chinese (SimSun, 10.5 points) and English (Times New Roman, 12 points) texts at 25 cm and 33 cm. Accuracy was obtained by subtracting the accommodation response from the accommodation demand. Nearwork-induced transient myopia was obtained by subtracting the pretask distance refraction from the posttask refraction, and regression was determined as the time for the posttask refraction to return to pretask levels. Results: There were significant, but small, effects of text type (Chinese, 0.97 ± 0.32 diopters [D] vs. English, 1.00 ± 0.37 D; F1,1230 = 7.24, p = 0.007) and reading distance (33 cm, 1.01 ± 0.30 D vs. 25 cm, 0.97 ± 0.39 D; F1,1230 = 7.74, p = 0.005) on accommodation accuracy across all participants. Accuracy was similar for emmetropic and myopic children across all reading tasks. Neither text type nor reading distance had significant effects on NITM or its regression. Myopes had greater NITM (by 0.07 D; F1,81 = 5.05, p = 0.03) that took longer (by 50 s; F1,81 = 31.08, p < 0.01) to dissipate. Conclusions: Reading Chinese text caused smaller accommodative lags than reading English text, but the small differences were not clinically significant. Myopic children had significantly greater NITM and longer regression than emmetropic children for both texts. Whether differences in NITM are a cause or consequence of myopia cannot be answered from this study.
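The two derived measures defined in the Methods above reduce to simple differences; a minimal sketch (the function names are ours).

```python
def accommodative_lag(demand_D, response_D):
    # Accuracy (lag) = accommodation demand minus accommodation response,
    # both in diopters, as defined in the Methods above.
    return demand_D - response_D

def nitm(pretask_refraction_D, posttask_refraction_D):
    # NITM = posttask refraction minus pretask distance refraction,
    # per the definition above.
    return posttask_refraction_D - pretask_refraction_D
```

At a 33 cm reading distance the accommodation demand is 1/0.33 ≈ 3.0 D, so a 2.0 D response would give a lag of about 1.0 D, in line with the mean lags reported above.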