936 results for Nematode burden


Relevance:

10.00%

Abstract:

International research on prisoners demonstrates poor health outcomes, including chronic disease, with a high overall burden to the community. Prisoners are predominantly male and young. In Australia, the average incarceration length is 3 years, sufficient to impact long-term health, including nutrition. Food in prisons is highly controlled, yet gaps exist in policy. In most Western countries, prisons promote healthy foods that are often incongruent with prisoners' expectations or wants. Few studies have been conducted on dietary intakes during incarceration in relation to food policy. In this study, detailed diet histories were collected from 120 of 945 men (mean age = 32 years) in a high-secure prison. Intakes were verified via individual purchase records, mealtime observations, and audits of food preparation, purchasing and holdings. Physical measurements (including fasting bloods) were taken and medical records reviewed. Results showed that the standard food provided was consistent with current dietary guidelines, although limited in menu choice. Diet histories revealed that self-funded foods contributed 1–63% of energy (mean = 30%), 0–83% of sugar (mean = 38%), 1–77% of saturated fat (mean = 31%) and 1–59% of sodium (mean = 23%). High levels of modification of the food provided were found, using minimal cooking amenities and incorporating self-funded foods and/or foods retained from previous meals. Medical records and physical measurements confirmed markers of chronic disease. This study highlights the need to establish clear guidelines on all food available in prisons if chronic disease risk reduction is a goal. It also supports evidence-based food and nutrition policy covering menu choice, food quality, quantity and safety, as well as the type of, and access to, self-funded foods.

Relevance:

10.00%

Abstract:

Alcohol-related mortality and morbidity represent a substantial financial burden to communities across the world. In Australia, conservative estimates place the societal cost (2004-2005) of alcohol abuse at approximately 15.3 billion dollars annually (Collins & Lapsley, 2008). Research has found that adolescence and young adulthood is a peak period for heavy episodic alcohol consumption, with over a third of all people aged 14-19 years having been at risk of acute alcohol-related harm at least once in the prior 12 months (Australian Institute of Health and Welfare [AIHW], 2008). While excessive alcohol consumption has, for a long time, been seen as a male problem, there has been a gradual shift towards social acceptance of female drinking, which has resulted in a diminishing gap in drinking quantity and style between men and women (Roche & Deehan, 2002). There is substantial evidence that women are at higher risk than men for detrimental physical, medical, social and psychological effects of at-risk alcohol consumption (Epstein et al., 2007). Research outlining the epidemiology of women’s substance use emphasises the need for further examination of influences that may be gender specific and culturally defined (Matheson, 2008; Measham & Ostergaard, 2009). As such, there is a need to utilise female perspectives in examining alcohol consumption and alcohol-related problems in order to reflect a more balanced and complete picture of drinking in today’s culture (Allamani, 2008). Currently, a number of reasons are offered to explain the observed trends, including a reduction in traditional sanctions and social norms against women drinking, financial emancipation, cultural shift and targeted advertising, to name a few. However, there is as yet comparatively little research examining drinking by young women in order to understand this ‘new’ drinking pattern. Most research into alcohol use, and the subsequent intervention and prevention campaigns, has been based on male perceptions and constructs of drinking. While such approaches have provided important information regarding the quantity and frequency of alcohol consumption by women, they do not address the important question of why. To understand the why, research needs to explore the difference between males and females in the meaning of the behaviour and the place that drinking holds for them.

Relevance:

10.00%

Abstract:

Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then describes an open standard through which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
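As a rough illustration of the kind of information such an interchange format would need to carry, the sketch below defines a hypothetical parametric component in Python: explicit geometry is derived on demand from a small set of driving parameters, so a parameter change propagates to the generated form. The class, parameter names and geometry rule are invented for illustration only and are not drawn from the IFC schema or the paper's proposed standard.

```python
# Minimal, hypothetical sketch of a parametric component: the interchangeable
# unit is the parameters plus the generative rule, not the explicit geometry.
from dataclasses import dataclass


@dataclass
class ParametricWall:
    length: float = 5.0      # driving parameters (metres)
    height: float = 3.0
    thickness: float = 0.2

    def generate(self) -> dict:
        """Regenerate explicit geometry from the current parameter values."""
        return {
            "volume": self.length * self.height * self.thickness,
            "footprint": [(0.0, 0.0), (self.length, 0.0),
                          (self.length, self.thickness), (0.0, self.thickness)],
        }


wall = ParametricWall(length=8.0)   # change a parameter...
print(wall.generate())              # ...and the geometry is re-derived
```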

Relevance:

10.00%

Abstract:

Diet Induced Thermogenesis (DIT) is the energy expended consequent to meal consumption, and reflects the energy required for the processing and digestion of food consumed throughout each day. Although DIT is the total energy expended across a day in digestive processes in response to a number of meals, most studies measure thermogenesis in response to a single meal (Meal Induced Thermogenesis: MIT) as a representation of an individual’s thermogenic response to acute food ingestion. As a component of energy expenditure, DIT may have a contributing role in weight gain and weight loss. While the evidence is inconsistent, research has tended to reveal a suppressed MIT response in obese compared to lean individuals, which identifies individuals with an efficient storage of food energy and hence a greater tendency for weight gain. Appetite is another factor regulating body weight through its influence on energy intake. Preliminary research has shown a potential link between MIT and postprandial appetite, as both are responses to food ingestion and show a similar dependence on the macronutrient content of food. There is a growing interest in understanding how both MIT and appetite are modified with changes in diet, activity levels and body size. However, the findings from MIT research have been highly inconsistent, potentially due to the vastly divergent protocols used for its measurement. Therefore, the main theme of this thesis was, firstly, to address some of the methodological issues associated with measuring MIT. Additionally, this thesis aimed to measure postprandial appetite simultaneously with MIT to test for any relationships between these meal-induced variables, and to assess changes that occur in MIT and postprandial appetite during periods of energy restriction (ER) and following weight loss. Two separate studies were conducted to achieve these aims. Given the increasing prevalence of obesity, it is important to develop accurate methodologies for measuring the components potentially contributing to its development and to understand the variability within these variables. Therefore, the aim of Study One was to establish a protocol for measuring the thermogenic response to a single test meal (MIT) as a representation of DIT across a day. This was done by determining the reproducibility of MIT with a continuous measurement protocol and determining the effect of measurement duration. The benefit of a fixed resting metabolic rate (RMR), a single measure of RMR used to calculate each subsequent measure of MIT, compared with separate baseline RMRs, each measured immediately prior to an MIT test meal, was also assessed to determine which method had greater reproducibility. Subsidiary aims were to measure postprandial appetite simultaneously with MIT, to determine its reproducibility between days and to assess potential relationships between these two variables. Ten healthy individuals (5 males, 5 females; age = 30.2 ± 7.6 years, BMI = 22.3 ± 1.9 kg/m2, fat mass = 27.6 ± 5.9%) undertook three testing sessions within a 1-4 week period. During the first visit, participants had their body composition measured using DXA for descriptive purposes, then had an initial 30-minute measure of RMR to familiarise them with the testing and to be used as a fixed baseline for calculating MIT. During the second and third testing sessions, MIT was measured.
Measures of RMR and MIT were undertaken using a metabolic cart with a ventilated hood to measure energy expenditure via indirect calorimetry, with participants in a semi-reclined position. The procedure on each MIT test day was: 1) a baseline RMR measured for 30 minutes; 2) a 15-minute break in the measure to consume a standard 576 kcal breakfast (54.3% CHO, 14.3% PRO, 31.4% FAT) comprising muesli, milk, toast, butter, jam and juice; and 3) six hours of measuring MIT, with two ten-minute breaks at 3 and 4.5 hours for participants to visit the bathroom. On the MIT test days, pre- and post-breakfast and then at 45-minute intervals, participants rated their subjective appetite, alertness and comfort on visual analogue scales (VAS). Prior to each test, participants were required to have fasted for 12 hours and to have undertaken no high-intensity physical activity for the previous 48 hours. Despite no significant group changes in the MIT response between days, individual variability was high, with an average between-day CV of 33%, which was not significantly improved (to 31%) by the use of a fixed RMR. The 95% limits of agreement, which ranged from 9.9% of energy intake (%EI) to -10.7%EI with the baseline RMRs and from 9.6%EI to -12.4%EI with the fixed RMR, indicated very large changes relative to the size of the average MIT response (MIT 1: 8.4%EI, 13.3%EI; MIT 2: 8.8%EI, 14.7%EI; baseline and fixed RMRs respectively). After just three hours, the between-day CV with the baseline RMR was 26%, which may indicate enhanced MIT reproducibility with shorter measurement durations. On average, 76, 89, and 96% of the six-hour MIT response was completed within three, four and five hours, respectively. Strong correlations were found between MIT at each of these time points and the total six-hour MIT (range for correlations r = 0.990 to 0.998; P < 0.01). The proportion of the six-hour MIT completed at 3, 4 and 5 hours was also reproducible (between-day CVs ≤ 8.5%). This indicated that shorter durations could be used on repeated occasions, with a similar percentage of the total response being captured. There was a lack of strong evidence of any relationship between the magnitude of the MIT response and subjective postprandial appetite. Given that a six-hour protocol places a considerable burden on participants, these results suggest that a post-meal measurement period of only three hours is sufficient to produce valid information on the metabolic response to a meal. However, while there was no mean change in MIT between test days, individual variability was large. Further research is required to better understand which factors best explain the between-day variability in this physiological measure.

With such a high prevalence of obesity, dieting has become a necessity to reduce body weight. However, during periods of ER, metabolic and appetite adaptations can occur which may impede weight loss. Understanding how metabolic and appetite factors change during ER and weight loss is important for designing optimal weight loss protocols. The purpose of Study Two was to measure the changes in the MIT response and subjective postprandial appetite during either continuous (CONT) or intermittent (INT) ER and following post-diet energy balance (post-diet EB). Thirty-six obese male participants were randomly assigned to either the CONT (age = 38.6 ± 7.0 years, weight = 109.8 ± 9.2 kg, fat mass = 38.2 ± 5.2%) or the INT diet group (age = 39.1 ± 9.1 years, weight = 107.1 ± 12.5 kg, fat mass = 39.6 ± 6.8%).
The study was divided into three phases: a four-week baseline (BL) phase, in which participants were provided with a diet to maintain body weight; an ER phase lasting either 16 (CONT) or 30 (INT) weeks, in which participants were provided with a diet supplying 67% of their energy balance requirements to induce weight loss; and an eight-week post-diet EB phase, providing a diet to maintain body weight after weight loss. The INT ER phase was delivered as eight two-week blocks of ER interspersed with two-week blocks designed to achieve weight maintenance. Energy requirements for each phase were predicted based on measured RMR and adjusted throughout the study to account for changes in RMR. All participants completed MIT and appetite tests during the BL and ER phases. Nine CONT and 15 INT participants completed the post-diet EB MIT tests, and 14 INT and 15 CONT participants completed the post-diet EB appetite tests. The MIT test day protocol was as follows: 1) a baseline RMR measured for 30 minutes; 2) a 15-minute break in the measure to consume a standard breakfast meal (874 kcal, 53.3% CHO, 14.5% PRO, 32.2% FAT); and 3) three hours of measuring MIT. MIT was calculated as the energy expenditure above the pre-meal RMR. Appetite test days were undertaken on a separate day using the same 576 kcal breakfast used in Study One. VAS were used to assess appetite pre- and post-breakfast, at one hour post-breakfast and then a further three times at 45-minute intervals. Appetite ratings were calculated for hunger and fullness as both the intra-meal change in appetite and the area under the curve (AUC). The three-hour MIT responses at BL, ER and post-diet EB were 5.4 ± 1.4%EI, 5.1 ± 1.3%EI and 5.0 ± 0.8%EI respectively for the CONT group, and 4.4 ± 1.0%EI, 4.7 ± 1.0%EI and 4.8 ± 0.8%EI for the INT group. Compared to BL, neither group had significant changes in the MIT response during ER or post-diet EB. There were no significant time-by-group interactions (p = 0.17), indicating a similar response to ER and post-diet EB in both groups. Contrary to what was hypothesised, there was a significant increase in postprandial AUC fullness in response to ER in both groups (p < 0.05). However, there were no significant changes in any of the other postprandial hunger or fullness variables. Despite no changes in MIT in either the CONT or INT group in response to ER or post-diet EB, and only a minor increase in postprandial AUC fullness, the individual changes in MIT and postprandial appetite in response to ER were large. However, those with the greatest MIT changes did not have the greatest changes in postprandial appetite. This study shows that postprandial appetite and MIT are unlikely to be altered during ER and are unlikely to hinder weight loss. Additionally, there were no changes in MIT in response to weight loss, indicating that body weight did not influence the magnitude of the MIT response. There were large individual changes in both variables; however, further research is required to determine whether these changes were real compensatory responses to ER or simply between-day variation. Overall, the results of this thesis add to the current literature by showing the large variability of continuous MIT measurements, which makes it difficult to compare MIT between groups and in response to diet interventions. This thesis provides evidence that shorter measures may provide equally valid information about the total MIT response and can therefore be utilised in future research to reduce the burden of long measurement durations.
This thesis indicates that MIT and postprandial subjective appetite are most likely independent of each other. This thesis also shows that, on average, energy restriction was not associated with compensatory changes in MIT and postprandial appetite that would have impeded weight loss. However, the large inter-individual variability supports the need to examine individual responses in more detail.
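To make the two quantities at the centre of Study One concrete, the sketch below computes MIT expressed as a percentage of energy intake (energy expended above the pre-meal RMR, relative to the meal's energy content) and a between-day coefficient of variation from two repeat measurements. The numbers are invented for illustration and are not the thesis data.

```python
# Hypothetical worked example of the MIT (%EI) and between-day CV calculations
# described above; all values are made up, not taken from the thesis.
import statistics


def mit_percent_ei(postprandial_kcal: float, rmr_kcal: float, meal_kcal: float) -> float:
    """MIT as energy expended above the pre-meal RMR, expressed as % of meal energy."""
    return (postprandial_kcal - rmr_kcal) / meal_kcal * 100


# Two hypothetical test days for one participant (six-hour totals, kcal).
day1 = mit_percent_ei(postprandial_kcal=420.0, rmr_kcal=372.0, meal_kcal=576.0)
day2 = mit_percent_ei(postprandial_kcal=440.0, rmr_kcal=378.0, meal_kcal=576.0)

mean_mit = statistics.mean([day1, day2])
between_day_cv = statistics.stdev([day1, day2]) / mean_mit * 100  # CV = SD / mean

print(f"MIT day 1: {day1:.1f}%EI, day 2: {day2:.1f}%EI, between-day CV: {between_day_cv:.0f}%")
```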

Relevance:

10.00%

Abstract:

A longitudinal study of grieving in family caregivers of people with dementia. Recent research into dementia has identified the long-term impact that the role of caring for a relative with dementia has on family members. This is largely due to the cognitive decline that characterises dementia and the losses that can be directly attributed to this. These losses include loss of memories, relationships and intimacy, and are often ambiguous, so that the grief that accompanies them is commonly not recognised or acknowledged. The role and impact of pre-death or anticipatory grief has not previously been widely considered as a factor influencing the health and well-being of family caregivers. Studies of grief in caregivers of a relative with dementia have concluded that grief is one of the greatest barriers to caregiving and is a primary determinant of caregiver well-being. The accumulation of losses, in conjunction with experiences unique to dementia caregiving, places family caregivers at risk of complicated grief. This occurs when integration of the death does not take place following bereavement, and has been associated with a range of negative health outcomes. The aim of this research was to determine the influence of grief, in addition to other factors representing both positive and negative aspects of the role, on the health-related quality of life of family caregivers of people with dementia, prior to and following the death of their relative with dementia. An exploratory research project, underpinned by a conceptual framework of caregivers’ adaptation in the context of subjective appraisal of the strains and gains in their role, was undertaken. The research comprised three studies. Study 1 was a scoping study that involved a series of semi-structured interviews with thirteen participants who were family caregivers of people with severe dementia or whose relative with dementia had died in the previous twelve months. The results of this study, in conjunction with factors identified in the literature, informed data collection for the further studies. Study 2 was a cross-sectional survey of fifty caregivers recruited when their relative was in the moderate to severe stage of dementia. This study provided the baseline data for Study 3, a prospective cohort follow-up study. Study 3 consisted of seventeen participants followed up at two time points after the death of their relative with dementia: at six weeks and again at six months. The scoping study indicated that differences in appraisal of the caregiving role and encounters with health professionals were related to levels of grief of caregivers prior to and following the death of the relative with dementia. This was supported in the baseline and follow-up studies. In the baseline study, after adjusting for all variables in multivariate regression models, subjective appraisal of burden was found to make a significant contribution (p<.05) to mental health-related quality of life. The two dependent variables, anticipatory grief and mental health-related quality of life, were significantly (p<.01) correlated at a bivariate level. In the follow-up study, linear mixed modelling and multiple regression analysis of the data found that subjective appraisal of burden and resilience were significantly associated (p<.05 and p<.01, respectively) with mental health-related quality of life over time.
In addition, bereavement and complicated grief were significantly associated (p<.05) with mental health following the death of the relative. In this study, social support and satisfaction with end-of-life care were found to be statistically associated (p<.05) with physical health-related quality of life over time. The strong relationship between caregivers' grief and their health-related quality of life over the entire caregiving trajectory and the period following the death of their relative highlights the urgent need for further research and interventions in this area. Overall, the results indicate that addressing risk and protective factors, including subjective appraisal of the caregiving role, resilience, social support and satisfaction with the end-of-life care of the relative, has the potential both to ameliorate negative health outcomes and to promote improved health for these caregivers. This research provides important information for the development of targeted and appropriate interventions that aim to promote resilience and reduce the personal burden on caregivers of people with dementia.

Relevance:

10.00%

Abstract:

Alterations in cognitive function are characteristic of the aging process in humans and other animals. However, the nature of these age-related changes in cognition is complex and is likely to be influenced by interactions between genetic predispositions and environmental factors, resulting in dynamic fluctuations within and between individuals. These inter- and intra-individual fluctuations are evident both in so-called normal cognitive aging and at the onset of cognitive pathology. Mild Cognitive Impairment (MCI), thought to be a prodromal phase of dementia, represents perhaps the final opportunity to mitigate cognitive declines that may lead to terminal conditions such as dementia. The prognosis for people with MCI is mixed, with the evidence suggesting that many will remain stable within 10 years of diagnosis, many will improve, and many will transition to dementia. If the characteristics of people who do not progress from MCI to dementia can be identified and replicated in others, it may be possible to reduce or delay dementia onset, thus reducing a growing personal and public health burden. Furthermore, if MCI onset can be prevented or delayed, the burden of cognitive decline in aging populations worldwide may be reduced. A cognitive domain that is sensitive to the effects of advancing age, and declines in which have been shown to presage the onset of dementia in MCI patients, is executive function. Moreover, environmental factors such as diet and physical activity have been shown to affect performance on tests of executive function. For example, improvements in executive function have been demonstrated as a result of increased aerobic and anaerobic physical activity and, although the evidence is not as strong, findings from dietary interventions suggest certain nutrients may preserve or improve executive functions in old age. These encouraging findings have been demonstrated in older adults with MCI and in their non-impaired peers. However, there are some gaps in the literature that need to be addressed. For example, little is known about the effect on cognition of an interaction between diet and physical activity. Both are important contributors to health and wellbeing, and a growing body of evidence attests to their importance in mental and cognitive health in aging individuals. Yet physical activity and diet are rarely considered together in the context of cognitive function. There is also little known about potential underlying biological mechanisms that might explain the physical activity/diet/cognition relationship. The first aim of this program of research was to examine the individual and interactive roles of physical activity and diet, specifically long-chain omega-3 polyunsaturated fatty acid (LCn3) consumption, as predictors of MCI status. The second aim was to examine executive function in MCI in the context of the individual and interactive effects of physical activity and LCn3. A third aim was to explore the role of immune and endocrine system biomarkers as possible mediators of the relationship between LCn3, physical activity and cognition. Study 1a was a cross-sectional analysis of MCI status as a function of the interaction between physical activity and erythrocyte proportions of LCn3. The marine-based LCn3s eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) have both received support in the literature as having cognitive benefits, although comparisons of the relative benefits of EPA and DHA, particularly in relation to the aetiology of MCI, are rare.
Furthermore, a limited amount of research has examined the cognitive benefits of physical activity in terms of MCI onset, and no studies have examined the potential interactive benefits of physical activity and either EPA or DHA. Eighty-four male and female adults aged 65 to 87 years, 50 with MCI and 34 without, participated in Study 1a. A binary logistic regression was conducted with MCI status as the dependent variable, and the individual and interactive relationships between physical activity and either EPA or DHA as predictors. Physical activity was measured using a questionnaire, and specific physical activity categories were weighted according to the metabolic equivalents (METs) of each activity to create a physical activity intensity index (PAI). A significant relationship was identified between MCI outcome and the interaction between the PAI and EPA: participants with a higher PAI and higher erythrocyte proportions of EPA were more likely to be classified as non-MCI than their less active peers with less EPA. Study 1b was a randomised controlled trial using the participants from Study 1a who were identified with MCI. Given the importance of executive function as a determinant of progression to more severe forms of cognitive impairment and dementia, Study 1b aimed to examine the individual and interactive effects of physical activity and supplementation with either EPA or DHA on executive function in a sample of older adults with MCI. Fifty male and female participants were randomly allocated to supplementation groups to receive 6 months of supplementation with EPA, DHA, or linoleic acid (LA), a long-chain polyunsaturated omega-6 fatty acid not known for cognitive-enhancing properties. Physical activity was measured using the PAI from Study 1a at baseline and follow-up. Executive function was measured using five tests thought to measure different executive function domains. Erythrocyte proportions of EPA and DHA were higher at follow-up; however, the PAI was not significantly different. There was also a significant improvement in three of the five executive function tests at follow-up. However, regression analyses revealed that none of the variance in executive function at follow-up was predicted by EPA, DHA, the PAI, the EPA-by-PAI interaction, or the DHA-by-PAI interaction. The absence of an effect may be due to a small sample resulting in limited power to find an effect, the lack of change in physical activity over time in terms of volume and/or intensity, or a combination of both. Study 2a was a cross-sectional study using cognitively unimpaired older adults to examine the individual and interactive effects of LCn3 and the PAI on executive function. Several possible explanations for the absence of an effect in Study 1b were identified, and from this consideration of alternative explanations it was hypothesised that post-onset interventions with LCn3, either alone or in interaction with self-reported physical activity, may not be beneficial in MCI. Thus, executive function responses to the individual and interactive effects of physical activity and LCn3 were examined in a sample of older male and female adults without cognitive impairment (n = 50). A further aim of Study 2a was to operationalise executive function using principal components analysis (PCA) of several executive function tests.
This approach was used firstly as a data reduction technique to overcome the task impurity problem, and secondly to examine the executive function structure of the sample for evidence of de-differentiation. Two executive function components were identified as a result of the PCA (EF 1 and EF 2). However, EPA, DHA, the PAI, and the EPA-by-PAI and DHA-by-PAI interactions did not account for any variance in the executive function components in subsequent hierarchical multiple regressions. Study 2b was an exploratory correlational study designed to explore the possibility that immune and endocrine system biomarkers may act as mediators of the relationship between LCn3, the PAI, the interaction between LCn3 and the PAI, and executive functions. Insulin-like growth factor-1 (IGF-1), an endocrine system growth hormone, and interleukin-6 (IL-6), an immune system cytokine involved in the acute inflammatory response, have both been shown to affect cognition, including executive functions. Moreover, IGF-1 and IL-6 have been shown to be antithetical, insofar as chronically increased IL-6 has been associated with reduced IGF-1 levels, a relationship that has been linked to age-related morbidity. Further, physical activity and LCn3 have been shown to modulate levels of both IGF-1 and IL-6. Thus, it is possible that the cognitive-enhancing effects of LCn3, physical activity or their interaction are mediated by changes in the balance between IL-6 and IGF-1. Partial and non-parametric correlations were conducted in a subsample of participants from Study 2a (n = 13) to explore these relationships. Correlations of interest did not reach significance; however, the coefficients were quite large for several relationships, suggesting studies with larger samples may be warranted. In summary, the current program of research found some evidence supporting an interaction between EPA (but not DHA) and higher energy expenditure via physical activity in differentiating between older adults with and without MCI. However, an RCT examining executive function in older adults with MCI found no support for increasing EPA or DHA while maintaining current levels of energy expenditure. Furthermore, a cross-sectional study examining executive function in older adults without MCI found no support for better executive function performance as a function of increased EPA or DHA consumption, greater energy expenditure via physical activity, or an interaction between physical activity and either EPA or DHA. Finally, an examination of endocrine and immune system biomarkers revealed promising relationships in terms of executive function in non-MCI older adults, particularly with respect to LCn3 and physical activity. Taken together, these findings demonstrate a potential benefit of increasing physical activity and LCn3 consumption, particularly EPA, in mitigating the risk of developing MCI. In contrast, no support was found for a benefit to executive function as a result of increased physical activity, LCn3 consumption or an interaction between physical activity and LCn3, in participants with or without MCI. These results are discussed with reference to previous findings in the literature, including possible limitations and opportunities for future research.
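As an illustration of the type of model described for Study 1a, the sketch below fits a binary logistic regression with a physical-activity-by-EPA interaction term. The data are simulated and the variable names (pai, epa, mci) are placeholders; this is not the study's dataset, coding or final model.

```python
# Sketch of a binary logistic regression with an interaction term, in the
# spirit of Study 1a; data are simulated and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 84
df = pd.DataFrame({
    "pai": rng.normal(30.0, 10.0, n),   # physical activity intensity index
    "epa": rng.normal(1.0, 0.3, n),     # erythrocyte EPA proportion (%)
})
# Simulate lower odds of MCI when both PAI and EPA are high (an interaction).
linpred = 0.3 - 0.4 * (df["pai"] - 30.0) / 10.0 * (df["epa"] - 1.0) / 0.3
df["mci"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# 'pai * epa' expands to both main effects plus the pai:epa interaction term.
model = smf.logit("mci ~ pai * epa", data=df).fit(disp=False)
print(model.summary())
print(np.exp(model.params))  # coefficients exponentiated to odds ratios
```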

Relevance:

10.00%

Abstract:

Banana is one of the world’s most popular fruit crops, and Sukali Ndizi is the most popular dessert banana in the East African region. Like other banana cultivars, Sukali Ndizi is threatened by several constraints, of which Fusarium wilt disease is the most destructive. Fusarium wilt is caused by a soil-borne fungus, Fusarium oxysporum f.sp. cubense (Foc). No effective control strategy currently exists for this disease and, although disease resistance exists in some banana cultivars, introducing resistance into commercial cultivars by conventional breeding is difficult because of low fertility. Considering that conventional breeding generates hybrids with additional undesirable traits, transformation is the most suitable way of introducing resistance into the banana genome. The success of this strategy depends on the availability of genes for genetic transformation. Recently, a novel strategy involving the expression of anti-apoptosis genes in plants was shown to result in resistance against several necrotrophic fungi, including Foc race 1 in the banana cultivar Lady Finger. This thesis explores the potential of a plant-codon-optimised nematode anti-apoptosis gene (Mced9) to provide resistance against Foc race 1 in the dessert banana cultivar Sukali Ndizi. Agrobacterium-mediated transformation was used to transform embryogenic cell suspensions of Sukali Ndizi with the plant expression vector pYC11, harbouring the Mced9 gene driven by the maize ubiquitin promoter and nptII as a plant selection marker. A total of 42 independently transformed lines were regenerated and characterised. The transgenic lines were multiplied, infected and evaluated for resistance to Foc race 1 in a small-pot bioassay. The pathogenicity of the Ugandan Foc race 1 isolate used for infection was pre-determined, and the spore concentration was standardised for consistent infection and symptom development. This process involved challenging tissue culture plants of Sukali Ndizi, a Foc race 1-susceptible cultivar, and Nakinyika, an East African Highland cultivar known to be resistant to Foc race 1, with Fusarium inoculum and observing external and internal disease symptom development. Rhizome discolouration symptoms were the best indicators of Fusarium wilt, with yellowing being an early sign of disease. Three transgenic lines were found to show significantly lower disease severity compared with the wild-type control plants after 13 weeks of infection, indicating that Mced9 has the potential to provide tolerance to Fusarium wilt in Sukali Ndizi.

Relevance:

10.00%

Abstract:

Since the first oil crisis in 1974, economic reasons have placed energy saving among the top priorities in most industrialised countries. In the decades that followed, another, equally strong driver for energy saving emerged: climate change caused by anthropogenic emissions, a large fraction of which result from energy generation. Intrinsically linked to energy consumption and its related emissions is another problem: indoor air quality. City dwellers in industrialised nations spend over 90% of their time indoors, and exposure to indoor pollutants contributes to ~2.6% of the global burden of disease and nearly 2 million premature deaths per year [1]. Changing climate conditions, together with human expectations of comfortable thermal conditions, elevate building energy requirements for heating, cooling, lighting and the use of other electrical equipment. We believe that these changes elicit a need to understand the nexus between energy consumption and its consequent impact on indoor air quality in urban buildings. In our opinion, the key questions are how energy consumption is distributed between different building services, and how the resulting pollution affects indoor air quality. The energy-pollution nexus has clearly been identified in qualitative terms; however, the quantification of such a nexus, to derive emissions or concentrations per unit of energy consumption, is still weak, inconclusive and requires forward thinking. Of course, various aspects of energy consumption and indoor air quality have been studied in detail separately, but in-depth, integrated studies of the energy-pollution nexus are hard to come by. We argue that such studies could be instrumental in providing sustainable solutions to maintain the trade-off between the energy efficiency of buildings and acceptable levels of air pollution for healthy living.

Relevance:

10.00%

Abstract:

Chlamydia is responsible for a wide range of diseases with an enormous global economic and health burden. As the majority of chlamydial infections are asymptomatic, a vaccine has the greatest potential to reduce infection and disease prevalence. Protective immunity against Chlamydia requires the induction of a mucosal immune response, ideally at the multiple sites in the body where an infection can be established. Mucosal immunity is most effectively stimulated by targeting vaccination to the epithelium, which is best accomplished by direct vaccine application to mucosal surfaces rather than by injection. The efficacy of needle-free vaccines, however, relies on a powerful adjuvant to overcome mucosal tolerance. As very few adjuvants have proven able to elicit mucosal immunity without harmful side effects, there is a need to develop non-toxic adjuvants or safer ways to administer pre-existing toxic adjuvants. In the present study we investigated the novel non-toxic mucosal adjuvant CTA1-DD. The immunogenicity of CTA1-DD was compared to our "gold-standard" mucosal adjuvant combination of cholera toxin (CT) and cytosine-phosphate-guanosine oligodeoxynucleotide (CpG-ODN). We also utilised different needle-free immunisation routes, intranasal (IN), sublingual (SL) and transcutaneous (TC), to stimulate the induction of immunity at the multiple mucosal surfaces in the body where Chlamydia are known to infect. Moreover, administering each adjuvant by different routes may also limit the toxicity of the CT/CpG adjuvant, which is currently restricted from use in humans. Mice were immunised with either adjuvant together with the chlamydial major outer membrane protein (MOMP) to evaluate vaccine safety and quantify the induction of antigen-specific mucosal immune responses. The level of protection against infection and disease was also assessed in vaccinated animals following a live genital or respiratory tract infectious challenge. The non-toxic CTA1-DD was found to be safe and immunogenic when delivered via the IN route in mice, inducing a mucosal response and a level of protective immunity against chlamydial challenge comparable to its toxic CT/CpG counterpart administered by the same route. The utilisation of different routes of immunisation strongly influenced the distribution of antigen-specific responses to distant mucosal surfaces and also abrogated the toxicity of CT/CpG. The CT/CpG-adjuvanted vaccine was safe when administered by the SL and TC routes and conferred partial immunity against infection and pathology in both challenge models. This protection was attributed to the induction of antigen-specific pro-inflammatory cellular responses in the lymph nodes regional to the site of infection rather than in the spleen. The development of non-toxic adjuvants and of effective ways to reduce the side effects of toxic adjuvants has profound implications for vaccine development, particularly against mucosal pathogens like Chlamydia. Interestingly, we also identified, in both infection models, two contrasting vaccines capable of preventing either infection or pathology exclusively. This indicated that the development of pathology following infection of vaccinated animals was independent of bacterial load and was instead the result of immunopathology, potentially driven by the adaptive immune response generated following immunisation.
While both vaccines elicited high levels of interleukin (IL)-17 cytokines, the pathology-protected group displayed significantly reduced expression of the corresponding IL-17 receptors and hence an inhibition of signalling. This indicated that the balance of IL-17-mediated responses defines the degree of protection against infection and tissue damage generated following vaccination. This study has enabled us to better understand the immune basis of pathology and protection, which is necessary to design more effective vaccines.

Relevance:

10.00%

Abstract:

Background: Despite increasing diversity in pathways to adulthood, the choices available to young people are influenced by environmental, familial and individual factors, namely access to socioeconomic resources, family support, and mental and physical health status. Young people from families with a higher socioeconomic position (SEP) are more likely to pursue tertiary education and delay entry to adulthood, whereas those from low socioeconomic backgrounds are less likely to attain higher education or training, and more likely to partner and become parents early. The first group is commonly termed ‘emerging adults’ and the latter ‘early starters’. Mental health disorders during this transition can seriously disrupt psychological, social and academic development as well as employment prospects. Depression, anxiety and most substance use disorders have their onset during adolescence and early adulthood, with approximately three quarters of lifetime psychiatric disorders having emerged by 24 years of age. Aims: This thesis aimed to explore the relationships between mental health, sociodemographic factors and family functioning during the transition to adulthood. Four areas were investigated: 1) the key differences between emerging adults and ‘early starters’ were examined, focusing on a series of social, economic and demographic factors as well as DSM-IV diagnoses; 2) methodological issues associated with the measurement of depression and anxiety in young adults were explored by comparing a quantitative measure of symptoms of anxiety and depression (Achenbach’s YSR and YASR internalising scales) with DSM-IV-diagnosed depression and anxiety; 3) the association between family SEP and DSM-IV depression and anxiety was examined in relation to the different pathways to adulthood; and 4) the association between pregnancy loss (abortion and miscarriage) and DSM-IV diagnoses of common psychiatric disorders was assessed in young women who reported early parenting, experiencing a pregnancy loss, or never having been pregnant. Methods: Data were taken from the Mater University Study of Pregnancy (MUSP), a large birth cohort started in 1981 in Brisbane, Australia. A total of 7,223 mothers and their children were assessed five times, at 6 months, 5, 14 and 21 years after birth. Over 3,700 young adults, aged 18 to 23 years, were interviewed at the 21-year phase. Respondents completed an extensive series of self-reported questionnaires and a computerised structured psychiatric interview. Three outcomes were assessed at the 21-year phase. Mental health disorders were diagnosed by a computerised structured psychiatric interview (CIDI-Auto); the prevalence of DSM-IV depression, anxiety and substance use disorders within the previous 12 months, during the transition (between the ages of 18 and 23 years), or over the lifetime was examined. The primary outcome, ‘current stage in the transition to adulthood’, was developed using a measure conceptually constructed from the literature. The measure was based on important demographic markers, which defined four independent groups: emerging adults (single with no children and living with parents), and three categories of ‘early starter’: singles (with no children or partner, living independently), those with a partner (married or cohabiting but without children), and parents.
Early pregnancy loss was assessed using a measure that also defined four independent groups based on pregnancy outcomes in the young women. This categorised the young women into those who had never been pregnant, women who gave birth to a live child, and women who reported some form of pregnancy loss, either an abortion or a spontaneous miscarriage. A series of analyses were undertaken to test the study aims. Potential confounding and mediating factors were prospectively measured between the child’s birth and the 21-year phase. Binomial and multinomial logistic regression were used to estimate the risk of the relevant outcomes, and the associations were reported as odds ratios (OR) and 95% confidence intervals (95%CI). Key findings: The thesis makes a number of important contributions to our understanding of the transition to adulthood, particularly in relation to the mental health consequences associated with different pathways. Firstly, findings from the thesis clearly showed that young people who parented or partnered early fared worse across most of the economic and social factors, as well as the common mental disorders, when compared to emerging adults. That is, young people who became early parents were more likely to experience recent anxiety (OR=2.0, 95%CI 1.5-2.8) and depression (OR=1.7, 95%CI 1.1-2.7) than were emerging adults, after taking into account a range of confounding factors. Singles and those partnering early also had higher rates of lifetime anxiety and depression than emerging adults. Young people who partnered early, but were without children, had decreased odds of recent depression; this may be due to a protective effect of early marriage against depression. It was also found that young people who formed families early had an increased risk of cigarette smoking (parents OR=3.7, 95%CI 2.9-4.8) compared to emerging adults, but not of heavy alcohol use (parents OR=0.4, 95%CI 0.3-0.6) or recent illicit drug use. The high rates of cigarette smoking and tobacco use disorders in ‘early starters’ were explained by common risk factors related to early adversity and lower SEP. Having a child and early marriage may well function as a ‘turning point’ for some young people; it is not clear whether this is due to a conscious decision to disengage from a previous ‘substance-using’ lifestyle or simply because they no longer have the time to devote to such activities due to child care. In relation to the methodological issues associated with assessing common mental disorders in young adults, it was found that although the Achenbach empirical internalising scales successfully predicted both later DSM-IV depression (YSR OR=2.3, 95%CI 1.7-3.1) and concurrently diagnosed depression (YASR OR=6.9, 95%CI 5.0-9.5) and anxiety (YASR OR=5.1, 95%CI 3.8-6.7), the scales discriminated poorly between young people with and without DSM-IV-diagnosed mood disorder. Sensitivity values (the proportion of true positives) for the internalising scales were surprisingly low. Only a third of young people with current DSM-IV depression (the range across scales was 34% to 42%) were correctly identified as cases by the YASR internalising scales, and only a quarter of those with a current anxiety disorder (range 23% to 31%) were correctly identified. Also, use of the DSM-oriented scales increased sensitivity only marginally above the standard Achenbach scales (by 2-8% for depression and by 2-6% for anxiety).
This is despite the fact that the DSM-oriented scales were originally developed to overcome the poor prediction of DSM-IV diagnoses by the Achenbach scales. The internalising scales, both standard and DSM-oriented, were much more effective at identifying young people with comorbid depression and anxiety, with ORs of 10.1 to 21.7 depending on the internalising scale used. SEP is an important predictor of both an early transition to adulthood and the experience of anxiety during that time. Family income during adolescence was a strong predictor of early parenting and partnering before age 24, but not of early independent living. Compared to families in the upper quintile, young people from low-income families were nearly twice as likely to live with a partner and four times more likely to become parents (ORs ranged from 2.6 to 4.0). This association remained after adjusting for current employment and education level. Children raised in low-income families were 30% more likely to have an anxiety disorder (OR=1.3, 95%CI 0.9-1.9), but not depression, as young adults when compared to children from wealthier families. Emerging adults and ‘early starters’ from low-income families did not differ in their likelihood of having a later anxiety disorder. Young women reporting a pregnancy loss had nearly three times the odds of experiencing a lifetime illicit drug disorder (excluding cannabis) [abortion OR=3.6, 95%CI 2.0-6.7 and miscarriage OR=2.6, 95%CI 1.2-5.4]. Abortion was associated with alcohol use disorder (OR=2.1, 95%CI 1.3-3.5) and 12-month depression (OR=1.9, 95%CI 1.1-3.1). These findings suggest that the association identified by Fergusson et al. between abortion and later psychiatric disorders in young women may be due to pregnancy loss and not to abortion per se. Conclusion: Findings from this thesis support the view that young people who parent or partner early have a greater burden of depression and anxiety when compared to emerging adults. As well, young women experiencing pregnancy loss, from either abortion or miscarriage, are more likely to experience depression and anxiety than are those who give birth to a live infant or who have never been pregnant. Depression, anxiety and substance use disorders often go unrecognised and untreated in young people; this is especially true of young people from lower SEP backgrounds. Early identification of these common mental health disorders is important, as depression and anxiety experienced during the transition to adulthood have been found to seriously disrupt an individual’s social, educational and economic prospects in later life.
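Two of the quantities reported throughout this abstract, sensitivity and odds ratios with 95% confidence intervals, can be computed directly from a 2x2 table. The sketch below uses made-up counts purely to show the arithmetic; the numbers are not from the MUSP data.

```python
# Worked example, with hypothetical counts, of sensitivity and an odds ratio
# with its 95% confidence interval from a 2x2 screening-vs-diagnosis table.
import math

# Hypothetical 2x2 table: internalising scale result vs DSM-IV diagnosis.
tp, fn = 34, 66     # diagnosed cases: scale positive / scale negative
fp, tn = 50, 350    # non-cases:       scale positive / scale negative

sensitivity = tp / (tp + fn)                      # proportion of true positives
odds_ratio = (tp * tn) / (fp * fn)
se_log_or = math.sqrt(1/tp + 1/fn + 1/fp + 1/tn)  # SE of log(OR) (Woolf method)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"sensitivity = {sensitivity:.0%}")
print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```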

Relevance:

10.00%

Abstract:

Aims and objectives. To examine Chinese cancer patients’ fatigue self-management, including the types of self-management behaviours used, their confidence in using these behaviours, the degree of relief obtained and the factors associated with patients’ use of fatigue self-management behaviours. Background. Fatigue places a significant burden on patients with cancer undergoing chemotherapy. While some studies have explored fatigue self-management in Western settings, very few studies have explored self-management behaviours in China. Design. Cross-sectional self- and/or interviewer-administered survey. Methods. A total of 271 participants with self-reported fatigue in the past week were recruited from a specialist cancer hospital in south-east China. Participants completed measures assessing the use of fatigue self-management behaviours, corresponding self-efficacy and perceived relief levels, plus items assessing demographic characteristics, fatigue experiences, distress and social support. Results. A mean of 4.94 (±2.07; range 1–10) fatigue self-management behaviours was reported. Most behaviours were rated as providing moderate relief and were implemented with moderate self-efficacy. Regression analyses identified that having more support from one’s neighbourhood and better functional status predicted the use of a greater number of self-management behaviours. Separate regression analyses identified that greater neighbourhood support predicted greater relief from ‘activity enhancement behaviours’ and that better functional status predicted greater relief from ‘rest and sleep behaviours’. Higher self-efficacy scores predicted greater relief from the corresponding behaviours. Conclusions. A range of fatigue self-management behaviours were initiated by Chinese patients with cancer. Individual, condition and environmental factors were found to influence engagement in, and relief from, fatigue self-management behaviours. Relevance to clinical practice. Findings highlight the need for nurses to explore patients’ use of fatigue self-management behaviours and the effectiveness of these behaviours in reducing fatigue. Interventions that improve patients’ self-efficacy and neighbourhood supports have the potential to improve outcomes from fatigue self-management behaviours.

Relevance:

10.00%

Abstract:

The findings presented in this paper are part of a research project designed to provide a preliminary indication of the support needs of post-diagnosis women with breast cancer in remote and isolated areas of Queensland. This discussion presents data focusing on the women’s expressed personal concerns. For participants in this research, a diagnosis of breast cancer involves a confrontation with their own mortality and the possibility of a reduced life span. This is a definite life crisis, creating shock and requiring considerable adjustment. Along with these generic issues, the participants also articulated significant issues in relation to their experience as women in a rural setting. These concerns centred on worries about how their partners and families would cope during their absences for treatment, the additional burden on the family of having to run the property or farm during the participant’s absence or illness, the added financial strain brought about by the cost of travel for treatment, the maintenance of properties during absences, and problems created by time off from properties or self-employment. These findings accord with other reports on health and welfare services for rural Australians and with the generic psycho-oncology literature on breast cancer.

Relevance:

10.00%

Abstract:

Despite the numerous reports of difficulties experienced by health care providers in providing psychosocial care to terminally ill patients and their families, few studies have yet been undertaken to examine the effectiveness of different educational approaches to addressing these issues. The aim of this paper is to describe a programme of professional development for palliative care nurses which is currently being offered to 181 registered nurses in Queensland, Australia. The programme is based on an action learning model and is designed to facilitate processes of reflection and peer consultation. In Part One of this paper, a review of the relevant literature is presented to provide the background and rationale for the programme design. Details of the research programme developed to evaluate the programme will be presented in Part Two of this paper, which is to be published in the next issue of this Journal. Surveys of health professionals suggest that the demands of working with terminally ill patients are associated with a great deal of stress (Beaton and Degner 1990, Seale 1992, Vachon 1995) and emotional burden, as they are confronted with their patients' physical and emotional suffering over extended periods of time (Ullrich and Fitzgerald 1990). Key areas of concern (Lyons 1988, Bramwell 1989, Seale 1992, Copp and Dunn 1993, Wilkinson 1995) include:
* Handling questions and conversations with dying patients.
* Dealing with ethical and moral issues.
* Handling emotions.
* Giving hope.
* Providing spiritual care and bereavement support.
* Confronting team communication problems.

Relevance:

10.00%

Abstract:

Background Falls are one of the most frequently occurring adverse events impacting the recovery of older hospital inpatients. Falls can threaten both immediate and longer-term health and independence. There is a need to identify cost-effective means of preventing falls in hospitals. Hospital-based falls prevention interventions tested in randomized trials have not yet been subjected to economic evaluation. Methods Incremental cost-effectiveness analysis was undertaken from the health service provider perspective, over the period of hospitalization (the time horizon), using Australian dollars (A$) at 2008 values. Analyses were based on data from a randomized trial among n = 1,206 acute and rehabilitation inpatients. Decision tree modeling with three-way sensitivity analyses was conducted using burden-of-disease estimates developed from trial data and previous research. The intervention was a multimedia patient education program provided with trained health professional follow-up, previously shown to reduce falls among cognitively intact hospital patients. Results The short-term cost to a health service of one cognitively intact patient being a faller could be as high as A$14,591 (2008). Based on primary trial data, the education program cost A$526 (2008) to prevent one cognitively intact patient becoming a faller and A$294 (2008) to prevent one fall. These estimates were unstable due to high variability in the hospital costs accrued by individual patients involved in the trial. There was a 52% probability that the complete program was both more effective and less costly (from the health service perspective) than providing usual care alone. Decision tree modeling sensitivity analyses identified that, when provided in real-life contexts, the program would be both more effective in preventing falls among cognitively intact inpatients and cost-saving where the proportion of these patients who would otherwise fall under usual care conditions is at least 4.0%. Conclusions This economic evaluation was designed to assist health care providers in deciding in what circumstances this intervention should be provided. If the proportion of cognitively intact patients falling on a ward under usual care conditions is 4% or greater, then provision of the complete program in addition to usual care will likely both prevent falls and reduce costs for a health service.
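The incremental comparison described in the Methods reduces to a simple ratio: the extra cost of adding the education program to usual care divided by the number of falls (or fallers) it prevents. The sketch below shows that arithmetic with hypothetical per-100-patient figures; the numbers are illustrative only and are not the trial's cost or fall data.

```python
# Hypothetical incremental cost-effectiveness calculation for a falls
# prevention program; all figures are invented for illustration.
def incremental_cost_per_event_prevented(cost_intervention: float,
                                         cost_usual_care: float,
                                         events_intervention: int,
                                         events_usual_care: int) -> float:
    """Extra cost of the intervention divided by the events it prevents."""
    extra_cost = cost_intervention - cost_usual_care
    events_prevented = events_usual_care - events_intervention
    return extra_cost / events_prevented


# Hypothetical figures per 100 admitted patients (A$ and fall counts).
cost_per_fall_prevented = incremental_cost_per_event_prevented(
    cost_intervention=12_000.0,   # usual care plus education program
    cost_usual_care=9_000.0,
    events_intervention=8,        # falls under the program
    events_usual_care=18,         # falls under usual care
)
print(f"Incremental cost per fall prevented: A${cost_per_fall_prevented:.0f}")
```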

Relevance:

10.00%

Abstract:

Migraineurs experience a significant decline in functioning and productivity, which in turn translates into diminished quality of life and a major economic burden on society at large [1]. Although current research has better elucidated the pathophysiology underlying migraine, the exact etiology remains to be defined. Biochemical factors that could potentially disrupt vascular endothelial function, leading to cortical spreading depression that can activate and affect the trigeminovascular system, are primary candidates for involvement in migraine pathophysiology [2]. The mechanisms proposed to explain the pathogenesis of migraine continue to evolve, but theories of variability in cortical excitability, neuronal dysregulation and neurotransmitter/receptor activation are all important and potentially amenable to nutraceutical manipulation [3]. As our knowledge about migraine pathogenesis expands, our understanding of the complex relationships between pharmacological doses, cofactor and hormone interactions, and neural and pain pathway activities will also advance, creating new avenues for research and migraine treatment development [3].