854 results for Brown Band Disease, Maldives, prevalence, host range, coral diseases
Abstract:
Background: Chronic diseases including type 2 diabetes are a leading cause of morbidity and mortality in midlife and older Australian women. There are a number of modifiable risk factors for type 2 diabetes and other chronic diseases, including smoking, nutrition, physical activity, and overweight and obesity. Little research has been conducted in the Australian context to explore the perceived barriers to health promotion activities in midlife and older Australian women with a chronic disease. Aims: The primary aim of this study was to explore women’s perceived barriers to health promotion activities to reduce modifiable risk factors, and the relationship of perceived barriers to smoking behaviour, fruit and vegetable intake, physical activity and body mass index. A secondary aim was to investigate nurses’ perceptions of the barriers to action for women with a chronic disease, and to compare those perceptions with those of the women. Methods: The study was divided into two phases. Phase 1 was a cross-sectional survey of women aged over 45 years with type 2 diabetes who were attending diabetes clinics in the Primary and Community Health Service of the Metro North Health Service District of Queensland Health (N = 22). The women were a subsample of women participating in a multi-modal lifestyle intervention, the ‘Reducing Chronic Disease among Adult Australian Women’ project. Phase 2 of the study was a cross-sectional online survey of nurses working in the Primary and Community Health Service in the Metro North Health Service District of Queensland Health (N = 46). Pender’s health promotion model was used as the theoretical framework for this study. Results: Women in this study had an average total barriers score of 32.18 (SD = 9.52), which was similar to average scores reported in the literature for women with a range of physical disabilities and illnesses. The five leading barriers for this group of women were: concern about safety; too tired; not interested; lack of information about what to do; and, ranked equal fifth, lack of time and feeling ‘I can’t do things correctly’. In this study there was no statistically significant difference in average total barriers scores between women in the intervention group and those in the usual care group of the parent study. There was also no significant relationship between the women’s socio-demographic variables and lifestyle risk factors and their level of perceived barriers. Nurses in the study had an average total barriers score of 44.48 (SD = 6.24), which was higher than all other average scores reported in the literature. The five leading barriers that nurses perceived to be an issue for women with a chronic disease were: lack of time and interference with other responsibilities (the equal leading barriers); embarrassment about appearance; lack of money; too tired; and lack of support from family and friends. There was no significant relationship between the nurses’ socio-demographic and nursing variables and their level of perceived barriers. When comparing the results of women and nurses in the study, there was a statistically significant difference in the median total barriers score between the groups (p < 0.001), with the nurses perceiving the barriers to be higher (Md = 43) than the women did (Md = 33). There was also a significant difference in the responses to fifteen of the eighteen individual barriers items (p < 0.002).
Conclusion: Although this study is limited by a small sample size, it contributes to understanding the perceptions of midlife and older women with a chronic disease, and of nurses, about the barriers to healthy lifestyle activities that women face. The study provides some evidence that the perceptions of women and nurses may differ, and argues that these differences may have significant implications for clinical practice. The study recommends a greater emphasis on assessing and managing perceived barriers to health promotion activities in health education and policy development, and proposes a conceptual model for understanding perceived barriers to action.
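For readers wanting to reproduce this kind of comparison, the sketch below shows a nonparametric (Mann-Whitney U) comparison of median total barriers scores between two groups, similar in spirit to the women-versus-nurses comparison reported above. The score arrays are hypothetical illustrations, not the study data.

# Illustrative sketch only: nonparametric comparison of median total barriers scores.
# The score arrays below are hypothetical, not data from this study.
import numpy as np
from scipy.stats import mannwhitneyu

women_scores = np.array([28, 33, 31, 25, 40, 35, 30, 29, 36, 32])   # hypothetical
nurse_scores = np.array([41, 45, 43, 39, 50, 44, 42, 47, 46, 43])   # hypothetical

u_stat, p_value = mannwhitneyu(women_scores, nurse_scores, alternative="two-sided")
print(f"Median (women)  = {np.median(women_scores):.1f}")
print(f"Median (nurses) = {np.median(nurse_scores):.1f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")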
Abstract:
Loss of the short arm of chromosome 1 is frequently observed in many tumor types, including melanoma. We recently localized a third melanoma susceptibility locus to chromosome band 1p22. Critical recombinants in linked families localized the gene to a 15-Mb region between D1S430 and D1S2664. To map the locus more finely, we have performed studies to assess allelic loss across the region in a panel of melanomas from 1p22-linked families, sporadic melanomas, and melanoma cell lines. Eighty percent of familial melanomas exhibited loss of heterozygosity (LOH) within the region, with a smallest region of overlapping deletions (SRO) of 9 Mb between D1S207 and D1S435. This high frequency of LOH makes it very likely that the susceptibility locus is a tumor suppressor. In sporadic tumors, four SROs were defined. SRO1 and SRO2 map within the critical recombinant and familial tumor region, indicating that one or the other is likely to harbor the susceptibility gene. However, SRO3 may also be significant because it overlaps with the marker with the highest 2-point LOD score (D1S2776), part of the linkage recombinant region, and the critical region defined in mesothelioma. The candidate genes PRKCL2 and GTF2B, within SRO2, and TGFBR3, CDC7, and EVI5, in a broad region encompassing SRO3, were screened in 1p22-linked melanoma kindreds, but no coding mutations were detected. Allelic loss in melanoma cell lines was significantly less frequent than in fresh tumors, indicating that this gene may not be involved late in progression, such as in overriding cellular senescence, a step necessary for the propagation of melanoma cells in culture.
Abstract:
An SEI metapopulation model is developed for the spread of an infectious agent by migration. The model portrays two age classes on a number of patches connected by migration routes which are used as host animals mature. A feature of this model is that the basic reproduction ratio may be computed directly, using a scheme that separates topography, demography, and epidemiology. We also provide formulas for individual patch basic reproduction numbers and discuss their connection with the basic reproduction ratio for the system. The model is applied to the problem of spatial spread of bovine tuberculosis in a possum population. The temporal dynamics of infection are investigated for some generic networks of migration links, and the basic reproduction ratio is computed; its value is not greatly different from that for a homogeneous model. Three scenarios are considered for the control of bovine tuberculosis in possums, where the spatial aspect is shown to be crucial for the design of disease management operations.
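As an illustration of how a multi-patch basic reproduction ratio can be computed, the sketch below applies a generic next-generation-matrix calculation (spectral radius of F V^-1) to a hypothetical three-patch SEI system. This is not the specific topography/demography/epidemiology decomposition used in the study, and all parameter values are invented.

# Illustrative sketch only: R0 for a generic multi-patch SEI model via the
# next-generation matrix. Not the scheme of the study; parameters are hypothetical.
import numpy as np

n = 3                                         # number of patches (hypothetical)
beta  = np.array([0.8, 0.5, 0.6])             # per-patch transmission rates
sigma = 0.2                                   # latent -> infectious rate
mu    = 0.1                                   # natural mortality
M = np.array([[0.0, 0.05, 0.0],               # M[i, j]: migration rate from patch i to j
              [0.05, 0.0, 0.05],
              [0.0, 0.05, 0.0]])

# Migration loss operator: emigration out of each patch minus immigration in.
L = np.diag(M.sum(axis=1)) - M.T

# Infected compartments ordered [E_1..E_n, I_1..I_n].
F = np.zeros((2 * n, 2 * n))
F[:n, n:] = np.diag(beta)                     # new infections enter E, driven by I

V = np.zeros((2 * n, 2 * n))
V[:n, :n] = (sigma + mu) * np.eye(n) + L      # losses from E (progression, death, migration)
V[n:, :n] = -sigma * np.eye(n)                # progression E -> I
V[n:, n:] = mu * np.eye(n) + L                # losses from I

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(f"Basic reproduction ratio R0 = {R0:.3f}")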
Abstract:
Objective: To comprehensively measure the burden of hepatitis B, liver cirrhosis and liver cancer in Shandong province, using disability-adjusted life years (DALYs) to estimate the disease burden attributable to hepatitis B virus (HBV) infection. Methods: Based on the mortality data for hepatitis B, liver cirrhosis and liver cancer derived from the third National Sampling Retrospective Survey for Causes of Death during 2004 and 2005, the incidence data for hepatitis B, and the prevalence and disability weights of liver cancer obtained from the Shandong Cancer Prevalence Sampling Survey in 2007, we calculated the years of life lost (YLLs), years lived with disability (YLDs) and DALYs for the three diseases following the procedures developed for the global burden of disease (GBD) study to ensure comparability. Results: The total burdens for hepatitis B, liver cirrhosis and liver cancer in Shandong province in 2005 were 211 616 (39 377 YLLs and 172 239 YLDs), 16 783 (13 497 YLLs and 3286 YLDs) and 247 795 (240 236 YLLs and 7559 YLDs) DALYs respectively, and the burdens for men were 2.19, 2.36 and 3.16 times those for women, respectively. The burden of hepatitis B was mainly due to disability (81.39%), whereas most of the burden of liver cirrhosis and liver cancer was due to premature death (80.42% and 96.95%, respectively). The burden per patient for hepatitis B, liver cirrhosis and liver cancer was 4.8, 13.73 and 11.11 DALYs, respectively. Conclusion: Hepatitis B, liver cirrhosis and liver cancer caused a considerable burden to the people living in Shandong province, indicating that control of hepatitis B virus infection would bring huge potential benefits.
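A minimal sketch of the DALY arithmetic referred to above (DALY = YLL + YLD), without the discounting and age-weighting options of the full GBD procedure; the input numbers are hypothetical, not the Shandong data.

# Illustrative sketch only: basic GBD-style DALY arithmetic with hypothetical inputs.
def yll(deaths: float, life_expectancy_at_death: float) -> float:
    """Years of life lost = deaths x standard life expectancy at age of death."""
    return deaths * life_expectancy_at_death

def yld(incident_cases: float, disability_weight: float, duration_years: float) -> float:
    """Years lived with disability = cases x disability weight x average duration."""
    return incident_cases * disability_weight * duration_years

# Hypothetical example for a single cause and age-sex stratum:
daly = yll(deaths=500, life_expectancy_at_death=20.0) + \
       yld(incident_cases=8000, disability_weight=0.1, duration_years=5.0)
print(f"DALYs = {daly:.0f}")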
Abstract:
Increasing resistance of rabbits to myxomatosis in Australia has led to the exploration of Rabbit Haemorrhagic Disease, also called Rabbit Calicivirus Disease (RCD), as a possible control agent. While the initial spread of RCD in Australia resulted in widespread rabbit mortality in affected areas, the possible population dynamic effects of RCD and myxomatosis operating within the same system have not been properly explored. Here we present early mathematical modelling examining the interaction between the two diseases. In this study we use a deterministic compartment model, based on the classical SIR model of infectious disease modelling. We consider only a single strain each of myxomatosis and RCD and neglect latent periods. We also include logistic population growth, with seasonal birth rates. We assume there is no cross-immunity due to either disease. The mathematical model allows for the possibility of both diseases being simultaneously present in an individual, although results are also presented for the case where co-infection is not possible, since co-infection is thought to be rare and questions exist as to whether it can occur. The simulation results of this investigation show that this is a crucial issue and should be part of future field studies. A single simultaneous outbreak of RCD and myxomatosis was simulated while ignoring natural births and deaths, appropriate for a short timescale of 20 days. Simultaneous outbreaks may be more common in Queensland. For the case where co-infection is not possible, we find that the simultaneous presence of myxomatosis in the population suppresses the prevalence of RCD, compared to an outbreak of RCD with no outbreak of myxomatosis, and thus leads to less effective control of the population. The reason for this is that infection with myxomatosis removes potentially susceptible rabbits from the possibility of infection with RCD (like a vaccination effect). For an initial myxomatosis prevalence of 20%, we found that the maximum prevalence of RCD was reduced by approximately 30% when there was no simultaneous outbreak of myxomatosis, while the peak RCD prevalence was only 15% when there was a simultaneous outbreak of myxomatosis. However, this maximum reduction will depend on the other parameter values chosen. When co-infection is allowed, this suppression effect still occurs but to a lesser degree, because rabbits infected with both diseases reduce the prevalence of myxomatosis. We also simulated multiple outbreaks over a longer timescale of 10 years, including natural population growth with seasonal birth rates and density-dependent (logistic) death rates. This shows how both diseases interact with each other and with population growth. Here we obtain sustained outbreaks occurring approximately every two years for the case of a simultaneous outbreak of both diseases but without simultaneous co-infection, with the prevalence varying from 0.1 to 0.5. Without myxomatosis present, the simulation predicts that RCD dies out quickly without further introduction from elsewhere. With the possibility of simultaneous co-infection of rabbits, sustained outbreaks are possible, but the outbreaks are less severe and more frequent (approximately yearly). While further model development is needed, our work to date suggests that: 1) the diseases are likely to interact via their impacts on rabbit abundance levels, and 2) introduction of RCD can suppress myxomatosis prevalence.
We recommend that further modelling, in conjunction with field studies, be carried out to investigate how these two diseases interact in the population.
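A much-simplified sketch of the kind of two-disease compartment model described above, for the case where co-infection is impossible, so that infection with one disease temporarily removes rabbits from the susceptible pool of the other (the vaccination-like effect). It omits latency, seasonality and logistic demography, and every parameter value is hypothetical, so it is not the authors' model.

# Illustrative sketch only: two diseases competing for one susceptible pool (no co-infection).
import numpy as np
from scipy.integrate import solve_ivp

beta_rcd, beta_myx = 0.9, 0.5      # transmission rates (per day, hypothetical)
gamma_rcd, gamma_myx = 0.25, 0.1   # removal rates (per day, hypothetical)

def rhs(t, y):
    s, i_rcd, i_myx, r = y
    new_rcd = beta_rcd * s * i_rcd
    new_myx = beta_myx * s * i_myx
    return [-new_rcd - new_myx,
            new_rcd - gamma_rcd * i_rcd,
            new_myx - gamma_myx * i_myx,
            gamma_rcd * i_rcd + gamma_myx * i_myx]

# Short simultaneous outbreak over 20 days, ignoring births and natural deaths.
y0 = [0.78, 0.02, 0.20, 0.0]       # 20% initial myxomatosis prevalence, 2% RCD (hypothetical)
sol = solve_ivp(rhs, (0, 20), y0, dense_output=True, max_step=0.1)
print(f"Peak RCD prevalence = {sol.y[1].max():.3f}")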
Abstract:
Early-stage treatments for osteoarthritis are attracting considerable interest as a means to delay, or avoid altogether, the pain and lack of mobility associated with late-stage disease, and the considerable burden that it places on the community. With the development of these treatments comes a need to assess the tissue to which they are applied, both in the trialling of new treatments and as an aid to clinical decision making. Here, we measure a range of mechanical indentation, ultrasound and near-infrared spectroscopy parameters in normal and osteoarthritic bovine joints in vitro to describe the role of different physical phenomena in disease progression, using this as a basis to investigate the potential value of the techniques as clinical tools. Based on 72 samples, we found that mechanical and ultrasound parameters showed differences between fibrillated tissue, macroscopically normal tissue in osteoarthritic joints, and normal tissue, yet were unable to differentiate degradation beyond that which was visible to the naked eye. Near-infrared spectroscopy showed a clear progression of degradation across the visibly normal osteoarthritic joint surface and, as such, was the only technique considered useful for clinical application.
Abstract:
This study investigated potential palaeoclimate proxies provided by rare earth element (REE) geochemistry in speleothems and in the clay mineralogy of cave sediments. Speleothem and sediment samples were collected from a series of cave fill deposits that occurred with rich vertebrate fossil assemblages in and around Mount Etna National Park, Rockhampton (central coastal Queensland). The fossil deposits range from Plio-Pleistocene to Holocene in age (based on uranium/thorium dating) and appear to represent depositional environments ranging from enclosed rainforest to semi-arid grasslands. Therefore, the Mount Etna cave deposits offer an ideal opportunity to test new palaeoclimate tools as they include deposits that span a known significant climate shift on the basis of independent faunal data. The first section of this study investigates the REE distribution of the host limestone to provide baseline geochemistry for subsequent speleothem investigations. The Devonian Mount Etna Beds were found to be more complex than previous literature had documented. The studied limestone massif is overturned, highly recrystallised in parts and consists of numerous allochthonous blocks with different spatial orientations. Despite the complex geologic history of the Mount Etna Beds, Devonian seawater-like REE patterns were recovered in some parts of the limestone and baseline geochemistry was determined for the bulk limestone for comparison with speleothem REE patterns. The second part of the study focused on REE distribution in the karst system and the palaeoclimatic implications of such records. It was found that REEs have a high affinity for calcite surfaces and that REE distributions in speleothems vary between growth bands much more than along growth bands, thus providing a temporal record that may relate to environmental changes. The morphology of different speleothems (i.e., stalactites, stalagmites, and flowstones) has little bearing on REE distributions provided they are not contaminated with particulate fines. Thus, the baseline knowledge developed in the study suggested that speleothems of different morphologies are broadly comparable for assessing palaeoclimatically controlled variations in REE distributions. Speleothems from rainforest and semi-arid phases were compared and it was found that there are definable differences in REE distribution that can be attributed to climate. In particular, during semi-arid phases, total REE concentrations decreased, LREEs became more depleted, Y/Ho ratios increased, La anomalies were more positive and Ce anomalies were more negative. This may reflect greater soil development during rainforest phases and more organic particles and colloids, which are known to transport REEs, in karst waters. However, on a finer temporal scale (i.e., between growth bands) within speleothems from the same climate regime, no difference was seen. It is suggested that this may be due to inadequate time for soil development changes over the time frames represented by differences in growth band density. The third part of the study was a reconnaissance investigation focused on the mineralogy of clay cave sediments, illite/kaolinite ratios in particular, and the potential palaeoclimatic implications of such records. Although the sample distribution was not optimal, the preliminary results suggest that the illite/kaolinite ratio increased during cold and dry intervals, consistent with decreased chemical weathering during those times.
The study provides a basic framework for future studies at differing latitudes to further constrain the parameters of the proxy. The identification of such a proxy recorded in cave sediment has broad implications, as clay ratios could potentially provide a basic local climate proxy in the absence of fossil faunas and speleothem material. This study suggests that REEs distributed in speleothems may provide information about water throughput and soil formation, thus providing a potential palaeoclimate proxy. It highlights the importance of understanding the host limestone geochemistry and broadens the distribution and potential number of cave field sites, as palaeoclimate information no longer relies solely on the presence of fossil faunas and/or speleothems. However, additional research is required to better understand the temporal scales required for the proxies to be recognised.
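For readers unfamiliar with the anomaly notation used above, the sketch below shows one common way of computing a shale-normalised Ce anomaly (Ce/Ce*) from neighbouring La and Pr concentrations; the PAAS normalising values are approximate and the sample concentrations are hypothetical, not data from this study.

# Illustrative sketch only: geometric-mean Ce anomaly with approximate PAAS normalisation.
import math

PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}   # approximate shale reference values, ppm

def ce_anomaly(la_ppm: float, ce_ppm: float, pr_ppm: float) -> float:
    la_n = la_ppm / PAAS["La"]
    ce_n = ce_ppm / PAAS["Ce"]
    pr_n = pr_ppm / PAAS["Pr"]
    return ce_n / math.sqrt(la_n * pr_n)      # values < 1 indicate a negative Ce anomaly

print(f"Ce/Ce* = {ce_anomaly(la_ppm=0.50, ce_ppm=0.40, pr_ppm=0.12):.2f}")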
Abstract:
Background: Depression is a major public health problem worldwide and is currently ranked second to heart disease for years lost due to disability. For many decades, international research has found that depressive symptoms occur more frequently among low socioeconomic status (SES) individuals than among their more-advantaged peers. However, the reasons why those in low socioeconomic groups suffer more depressive symptoms are not well understood. Studies investigating the prevalence of depression and its association with SES emanate largely from developed countries, with little research among developing countries. In particular, there is a serious dearth of research on depression, and no investigation of its association with SES, in Vietnam. The aims of the research presented in this Thesis are to: estimate the prevalence of depressive symptoms among Vietnamese adults, examine the nature and extent of the association between SES and depression, and elucidate causal pathways linking SES to depressive symptoms. Methods: The research was conducted between September 2008 and November 2009 in Hue city in central Vietnam and used a combination of qualitative (in-depth interviews) and quantitative (survey) data collection methods. The qualitative study contributed to the development of the theoretical model and to the refinement of culturally-appropriate data collection instruments for the quantitative study. The main survey comprised a cross-sectional population-based survey with randomised cluster sampling. A sample of 1976 respondents aged between 25 and 55 years from ten randomly-selected residential zones (quarters) of Hue city completed the questionnaire (response rate 95.5%). Measures: SES was classified using three indicators: education, occupation and income. The Center for Epidemiologic Studies-Depression (CES-D) scale was used to measure depressive symptoms (range 0-51, mean = 11.0, SD = 8.5). Three cut-off points for the CES-D scores were applied: ‘at risk for clinical depression’ (16 or above), ‘depressive symptoms’ (above 21) and ‘depression’ (above 25). Six psychosocial indicators (lifetime trauma, chronic stress, recent life events, social support, self-esteem, and mastery) were hypothesized to mediate the association between SES and depressive symptoms. Analyses: The prevalence of depressive symptoms was analysed using bivariate analyses. The multivariable analytic phase comprised ordinary least squares regression, in accordance with Baron and Kenny’s three-step framework for mediation modeling. All analyses were adjusted for a range of confounders, including age, marital status, smoking, drinking and chronic diseases, and the mediation models were stratified by gender. Results: Among these Vietnamese adults, 24.3% were at or above the cut-off for being ‘at risk for clinical depression’, 11.9% were classified as having depressive symptoms and 6.8% were categorised as having depression. SES was inversely related to depressive symptoms: the least educated, those with low occupational status and those with the lowest incomes reported more depressive symptoms. Socioeconomically disadvantaged individuals were more likely to report experiencing stress (lifetime trauma, chronic stress or recent life events), perceived less social support and reported fewer personal resources (self-esteem and mastery) than their more-advantaged counterparts. These psychosocial resources were all significantly associated with depressive symptoms independent of SES.
Each psychosocial factor showed a significant mediating effect on the association between SES and depressive symptoms. This was found for all measures of SES, and for both males and females. In particular, personal resources (mastery and self-esteem) and chronic stress accounted for a substantial proportion of the variation in depressive symptoms between socioeconomic groups. Social support and recent life events contributed modestly to socioeconomic differences in depressive symptoms, whereas lifetime trauma contributed the least to these inequalities. Conclusion: This is the first known study in Vietnam or any developing country to systematically examine the extent to which psychosocial factors mediate the relationship between SES and depression. The study contributes new evidence regarding the burden of depression in Vietnam. The findings have practical relevance for advocacy, for mental health promotion and for health-care services, and point to the need for programs that focus on building a sense of personal mastery and self-esteem. More broadly, the work presented in this Thesis contributes to the international scientific literature on the social determinants of depression.
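A minimal sketch of the Baron and Kenny three-step mediation logic described above, here fitted to simulated data with ordinary least squares (SES, chronic stress as the mediator, CES-D score as the outcome); the variable names and data are hypothetical, and no confounder adjustment or gender stratification is shown.

# Illustrative sketch only: Baron & Kenny style mediation on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
ses = rng.normal(size=n)                                 # higher = more advantaged (hypothetical)
stress = -0.5 * ses + rng.normal(size=n)                 # mediator
cesd = -0.3 * ses + 0.6 * stress + rng.normal(size=n)    # outcome

X_ses = sm.add_constant(ses)
step1 = sm.OLS(cesd, X_ses).fit()                        # SES -> outcome (total effect)
step2 = sm.OLS(stress, X_ses).fit()                      # SES -> mediator
step3 = sm.OLS(cesd, sm.add_constant(np.column_stack([ses, stress]))).fit()

print(f"Total effect of SES:          {step1.params[1]:.3f}")
print(f"Direct effect (adj. stress):  {step3.params[1]:.3f}")
print(f"Indirect effect (a*b):        {step2.params[1] * step3.params[2]:.3f}")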
Abstract:
Background and significance: Older adults with chronic diseases are at increasing risk of hospital admission and readmission. Approximately 75% of adults have at least one chronic condition, and the odds of developing a chronic condition increase with age. Chronic diseases consume about 70% of the total Australian health expenditure, and about 59% of hospital events for chronic conditions are potentially preventable. These figures have brought to light the importance of the management of chronic disease among the growing older population. Many studies have endeavoured to develop effective chronic disease management programs by applying social cognitive theory. However, limited studies have focused on chronic disease self-management in older adults at high risk of hospital readmission. Moreover, although the majority of studies have covered wide and valuable outcome measures, there is scant evidence examining fundamental health outcomes such as nutritional status, functional status and health-related quality of life. Aim: The aim of this research was to test social cognitive theory in relation to self-efficacy in managing chronic disease and three health outcomes, namely nutritional status, functional status, and health-related quality of life, in older adults at high risk of hospital readmission. Methods: A cross-sectional study design was employed for this research, and three studies were undertaken. Study One examined nutritional status and the validation of a nutritional screening tool; Study Two explored the relationships between participants’ characteristics, self-efficacy beliefs, and health outcomes based on the study’s hypothesized model; Study Three tested a theoretical model based on social cognitive theory, examining potential mechanisms for the mediation effects of social support and self-efficacy beliefs. One hundred and fifty-seven patients aged 65 years and older, with a medical admission and at least one risk factor for readmission, were recruited. Data were collected from medical records (demographics and medical history) and from self-report questionnaires. The nutrition data were collected by two registered nurses. For Study One, a contingency table and the kappa statistic were used to determine the validity of the Malnutrition Screening Tool. In Study Two, standard multiple regression, hierarchical multiple regression and logistic regression were undertaken to determine the significant predictors for the three health outcome measures. For Study Three, a structural equation modelling approach was taken to test the hypothesized self-efficacy model. Results: The findings of Study One suggested that a high prevalence of malnutrition continues to be a concern in older adults, as the prevalence of malnutrition was 20.6% according to the Subjective Global Assessment. Additionally, the findings confirmed that the Malnutrition Screening Tool is a valid nutritional screening tool for hospitalized older adults at risk of readmission when compared to the Subjective Global Assessment, with high sensitivity (94%) and specificity (89%) and substantial agreement between the two methods (κ = .74, p < .001; 95% CI .62-.86). Analysis of the data for Study Two found that depressive symptoms and perceived social support were the two strongest influential factors for self-efficacy in managing chronic disease in a hierarchical multiple regression.
Results of multivariable regression models suggested that advancing age, depressive symptoms and less tangible support were three important predictors of malnutrition. In terms of functional status, a standard regression model found that social support was the strongest predictor of Instrumental Activities of Daily Living, followed by self-efficacy in managing chronic disease. The results of standard multiple regression revealed that the number of hospital readmission risk factors adversely affected the physical component score, while depressive symptoms and self-efficacy beliefs were two significant predictors of the mental component score. In Study Three, the structural equation modelling found that self-efficacy partially mediated the effect of health characteristics and depression on health-related quality of life. The health characteristics had strong direct effects on functional status and body mass index. The results also indicated that social support partially mediated the relationship between health characteristics and functional status. With regard to the joint effects of social support and self-efficacy, social support fully mediated the effect of health characteristics on self-efficacy, and self-efficacy partially mediated the effect of social support on functional status and health-related quality of life. The results also demonstrated that the models fitted the data well, with a relatively high proportion of variance explained, implying that the hypothesized constructs under discussion were highly relevant; hence the application of social cognitive theory in this context was supported. Conclusion: This thesis highlights the applicability of social cognitive theory to chronic disease self-management in older adults at risk of hospital readmission. Further studies are recommended to validate and continue to extend the development of social cognitive theory in chronic disease self-management in older adults, to improve their nutritional and functional status and health-related quality of life.
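A minimal sketch of the Study One style validation arithmetic: sensitivity, specificity and Cohen's kappa computed from a 2x2 contingency table of screening tool versus reference standard. The counts below are hypothetical, not the thesis data.

# Illustrative sketch only: screening-tool validation metrics from hypothetical counts.
# Rows: screening tool (positive, negative); columns: reference standard (positive, negative).
tp, fp = 45, 15
fn, tn = 5, 135
total = tp + fp + fn + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

observed_agreement = (tp + tn) / total
expected_agreement = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}, kappa = {kappa:.2f}")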
Abstract:
This study aimed to gauge the presence of markers of chronic disease, as a basis for food and nutrition policy in correctional facilities. One hundred and twenty offenders, recruited from a Queensland Correctional Centre, provided informed consent and completed both dietary interviews and physical measurements. The mean age of the sample was 35.5 ± 12 years (range = 19–77 years); the mean age of the total population (n = 945) was 32.8 ± 10 years (range = 19–80 years). Seventy-nine participants also provided fasting blood samples. The mean body mass index (BMI) was 27 ± 3.5 kg/m2, with 72% having a BMI > 25 kg/m2. Thirty-three percent were classified as overweight or obese using waist circumference (mean = 92 ± 10 cm). Mean blood pressure was 130 ± 14 mmHg systolic and 73 ± 10 mmHg diastolic. Twenty-four percent were classified as hypertensive, of whom three were on antihypertensive medication. Eighteen percent had elevated triglycerides, and 40% had unfavourable total cholesterol to HDL ratios. Homeostatic Model Assessment (HOMA) scores were calculated from glucose and insulin; four participants were insulin resistant, two of whom had known diabetes. Assessment of metabolic syndrome, based on waist circumference (adjusted for ethnicity), blood lipids, blood pressure and plasma glucose, indicated that 25% (n = 20) were classified with metabolic syndrome. Eighty-four percent (n = 120) reported some physical activity each day, with 51 percent participating two or more times daily. Fifty-four percent reported smoking, with an additional 20% having smoked in the past. The findings suggest that waist circumference, rather than weight and BMI only, should be used in this group to determine weight status. The data suggest that markers of chronic disease are present and that food and nutrition policy must reflect this. Further analysis is being completed to determine relevant policy initiatives.
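A minimal sketch of the standard HOMA-IR arithmetic used to derive insulin resistance scores from fasting glucose and insulin, as referred to above; the example values and the flagging cut-off are illustrative assumptions, not study data.

# Illustrative sketch only: HOMA-IR from fasting glucose and insulin, hypothetical values.
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5"""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

score = homa_ir(fasting_glucose_mmol_l=5.4, fasting_insulin_uU_ml=12.0)
print(f"HOMA-IR = {score:.2f}  (flag if above a chosen cut-off, e.g. > 2.5)")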
Abstract:
Bananas are one of the world's most important food crops, providing sustenance and income for millions of people in developing countries and supporting large export industries. Viruses are considered major constraints to banana production, germplasm multiplication and exchange, and to genetic improvement of banana through traditional breeding. In Africa, the two most important virus diseases are bunchy top, caused by Banana bunchy top virus (BBTV), and banana streak disease, caused by Banana streak virus (BSV). BBTV is a serious production constraint in a number of countries within/bordering East Africa, such as Burundi, Democratic Republic of Congo, Malawi, Mozambique, Rwanda and Zambia, but is not present in Kenya, Tanzania and Uganda. Additionally, epidemics of banana streak disease are occurring in Kenya and Uganda. The rapidly growing tissue culture (TC) industry within East Africa, aiming to provide planting material to banana farmers, has stimulated discussion about the need for virus indexing to certify planting material as virus-free. Diagnostic methods for BBTV and BSV have been reported and, for BBTV, PCR-based assays are reliable and relatively straightforward. However, for BSV, high levels of serological and genetic variability and the presence of endogenous virus sequences within the banana genome complicate diagnosis. Uganda has been shown to contain the greatest diversity of BSV isolates found anywhere in the world. A broad-spectrum diagnostic test for BSV detection, which can discriminate between endogenous and episomal BSV sequences, is a priority. This PhD project aimed to establish diagnostic methods for banana viruses, with a particular focus on the development of novel methods for BSV detection, and to use these diagnostic methods for the detection and characterisation of banana viruses in East Africa. A novel rolling-circle amplification (RCA) method was developed for the detection of BSV. Using samples of Banana streak MY virus (BSMYV) and Banana streak OL virus (BSOLV) from Australia, this method was shown to distinguish between endogenous and episomal BSV sequences in banana plants. The RCA assay was used to screen a collection of 56 banana samples from south-west Uganda for BSV. RCA detected at least five distinct BSV isolates in these samples, including BSOLV and Banana streak GF virus (BSGFV), as well as three BSV isolates (Banana streak Uganda-I, -L and -M virus) for which only partial sequences had been previously reported. These latter three BSV had only been detected using immuno-capture (IC)-PCR and thus were possibly endogenous sequences. In addition to its ability to detect BSV, the RCA protocol was also demonstrated to detect other viruses within the family Caulimoviridae, including Sugar cane bacilliform virus and Cauliflower mosaic virus. Using the novel RCA method, three distinct BSV isolates from both Kenya and Uganda were identified and characterised. The complete genomes of these isolates were sequenced and annotated. All six isolates were shown to have a characteristic badnavirus genome organisation with three open reading frames (ORFs), and the large polyprotein encoded by ORF 3 was shown to contain conserved amino acid motifs for movement, aspartic protease, reverse transcriptase and ribonuclease H activities. As well, several sequences important for expression and replication of the virus genome were identified, including the conserved tRNAmet primer binding site present in the intergenic region of all badnaviruses.
Based on the International Committee on Taxonomy of Viruses (ICTV) guidelines for species demarcation in the genus Badnavirus, these six isolates were proposed as distinct species and named Banana streak UA virus (BSUAV), Banana streak UI virus (BSUIV), Banana streak UL virus (BSULV), Banana streak UM virus (BSUMV), Banana streak CA virus (BSCAV) and Banana streak IM virus (BSIMV). Using PCR with species-specific primers designed for each isolate, a genotypically diverse collection of 12 virus-free banana cultivars was tested for the presence of endogenous sequences. For five of the BSV, no amplification was observed in any cultivar tested, while for BSIMV, four positive samples were identified in cultivars with a B-genome component. During field visits to Kenya, Tanzania and Uganda, 143 samples were collected and assayed for BSV. PCR using nine sets of species-specific primers, and RCA, were compared for BSV detection. For five BSV species with no known endogenous counterpart (namely BSCAV, BSUAV, BSUIV, BSULV and BSUMV), PCR was used to detect 30 infections from the 143 samples. Using RCA, 96.4% of these samples were considered positive, with one additional sample detected using RCA which was not positive using PCR. For these five BSV, PCR and RCA were both useful for identifying infected samples, irrespective of the host cultivar genotype (Musa A- or B-genome components). For four additional BSV with known endogenous counterparts in the M. balbisiana genome (BSOLV, BSGFV, BSMYV and BSIMV), PCR was shown to detect 75 infections from the 143 samples. In 30 samples from cultivars with an A-only genome component, there was 96.3% agreement between PCR-positive samples and detection using RCA, again demonstrating that either PCR or RCA is a suitable method for detection. However, in 45 samples from cultivars with some B-genome component, the level of agreement between PCR-positive samples and RCA-positive samples was 70.5%. This suggests that, in cultivars with some B-genome component, many infections detected using PCR were the result of amplification of endogenous sequences. In these latter cases, RCA or another method which discriminates between endogenous and episomal sequences, such as immuno-capture PCR, is needed to diagnose episomal BSV infection. Field visits were made to Malawi and Rwanda to collect local isolates of BBTV for validation of a PCR-based diagnostic assay. The presence of BBTV in samples of bananas with bunchy top disease was confirmed, using PCR and RCA, in 28 out of 39 samples from Malawi and in all nine samples collected in Rwanda. For three isolates, one from Malawi and two from Rwanda, the complete nucleotide sequences were determined and shown to have a genome organisation similar to that of previously published BBTV isolates. The two isolates from Rwanda had at least 98.1% nucleotide sequence identity between each of the six DNA components, while the similarity between isolates from Rwanda and Malawi was between 96.2% and 99.4%, depending on the DNA component. At the amino acid level, similarities in the putative proteins encoded by DNA-R, -S, -M, -C and -N were found to range from 98.8% to 100%. In a phylogenetic analysis, the three East African isolates clustered together within the South Pacific subgroup of BBTV isolates. Nucleotide sequence comparison with isolates of BBTV from outside Africa identified India as the possible origin of the East African isolates of BBTV.
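A minimal sketch of a pairwise nucleotide identity calculation of the kind used to compare BBTV DNA components above. It assumes the two sequences are already aligned to equal length; the fragments shown are hypothetical, not BBTV sequences.

# Illustrative sketch only: percent identity between two aligned sequences (hypothetical data).
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

seq1 = "ATGGCTAGCTAGCTTACGATCGATCGT"
seq2 = "ATGGCTAGCTAGGTTACGATCGATCGT"
print(f"Identity = {percent_identity(seq1, seq2):.1f}%")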
Abstract:
BACKGROUND: Hallux valgus (HV) is a foot deformity commonly seen in medical practice, often accompanied by significant functional disability and foot pain. Despite frequent mention in a diverse body of literature, a precise estimate of the prevalence of HV is difficult to ascertain. The purpose of this systematic review was to investigate prevalence of HV in the overall population and evaluate the influence of age and gender. METHODS: Electronic databases (Medline, Embase, and CINAHL) and reference lists of included papers were searched to June 2009 for papers on HV prevalence without language restriction. MeSH terms and keywords were used relating to HV or bunions, prevalence and various synonyms. Included studies were surveys reporting original data for prevalence of HV or bunions in healthy populations of any age group. Surveys reporting prevalence data grouped with other foot deformities and in specific disease groups (e.g. rheumatoid arthritis, diabetes) were excluded. Two independent investigators quality rated all included papers on the Epidemiological Appraisal Instrument. Data on raw prevalence, population studied and methodology were extracted. Prevalence proportions and the standard error were calculated, and meta-analysis was performed using a random effects model. RESULTS: A total of 78 papers reporting results of 76 surveys (total 496,957 participants) were included and grouped by study population for meta-analysis. Pooled prevalence estimates for HV were 23% in adults aged 18-65 years (CI: 16.3 to 29.6) and 35.7% in elderly people aged over 65 years (CI: 29.5 to 42.0). Prevalence increased with age and was higher in females [30% (CI: 22 to 38)] compared to males [13% (CI: 9 to 17)]. Potential sources of bias were sampling method, study quality and method of HV diagnosis. CONCLUSIONS: Notwithstanding the wide variation in estimates, it is evident that HV is prevalent; more so in females and with increasing age. Methodological quality issues need to be addressed in interpreting reports in the literature and in future research.
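A minimal sketch of pooling prevalence proportions with a DerSimonian-Laird random-effects model, similar in spirit to the meta-analysis described above; the study proportions and sample sizes are hypothetical, not the review data.

# Illustrative sketch only: DerSimonian-Laird random-effects pooling of prevalence proportions.
import numpy as np

p = np.array([0.20, 0.28, 0.18, 0.31, 0.25])      # study prevalence estimates (hypothetical)
n = np.array([450, 320, 1200, 210, 760])           # study sample sizes (hypothetical)

var = p * (1 - p) / n                              # variance of each proportion
w_fixed = 1 / var
p_fixed = np.sum(w_fixed * p) / np.sum(w_fixed)

# Between-study variance (tau^2) by the DerSimonian-Laird method.
q = np.sum(w_fixed * (p - p_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(p) - 1)) / c)

w_re = 1 / (var + tau2)
p_re = np.sum(w_re * p) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"Pooled prevalence = {p_re:.3f} (95% CI {p_re - 1.96*se_re:.3f} to {p_re + 1.96*se_re:.3f})")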
Abstract:
Introduction: Smoking status in outpatients with chronic obstructive pulmonary disease (COPD) has been associated with a low body mass index (BMI) and reduced mid-arm muscle circumference (Cochrane & Afolabi, 2004). Individuals with COPD identified as malnourished have also been found to be twice as likely to die within 1 year compared to non-malnourished patients (Collins et al., 2010). Although malnutrition is both preventable and treatable, it is not clear what influence current smoking status, another modifiable risk factor, has on malnutrition risk. The current study aimed to establish the influence of smoking status on malnutrition risk and 1-year mortality in outpatients with COPD. Methods: A prospective nutritional screening survey was carried out between July 2008 and May 2009 at a large teaching hospital (Southampton General Hospital) and a smaller community hospital within Hampshire (Lymington New Forest Hospital). In total, 424 outpatients with a diagnosis of COPD were routinely screened using the ‘Malnutrition Universal Screening Tool’, ‘MUST’ (Elia, 2003); 222 males, 202 females; mean (SD) age 73 (9.9) years; mean (SD) BMI 25.9 (6.4) kg m⁻². Smoking status on the date of screening was obtained for 401 of the outpatients. Severity of COPD was assessed using the GOLD criteria, and social deprivation was determined using the Index of Multiple Deprivation (Nobel et al., 2008). Results: The overall prevalence of malnutrition (medium + high risk) was 22%, with 32% of current smokers at risk (current smokers accounted for 19% of the total COPD population). In comparison, 19% of nonsmokers and ex-smokers were likely to be malnourished [odds ratio, 1.965; 95% confidence interval (CI), 1.133–3.394; P = 0.015]. Smoking status remained an independent risk factor for malnutrition even after adjustment for age, social deprivation and disease severity (odds ratio, 2.048; 95% CI, 1.085–3.866; P = 0.027) using binary logistic regression. After adjusting for age, disease severity, social deprivation and smoking status, malnutrition remained a significant predictor of 1-year mortality [odds ratio (medium + high risk versus low risk), 2.161; 95% CI, 1.021–4.573; P = 0.044], whereas smoking status did not (odds ratio for smokers versus ex-smokers + nonsmokers, 1.968; 95% CI, 0.788–4.913; P = 0.147). Discussion: This study highlights the potential importance of combining nutritional support with smoking cessation in order to treat malnutrition. The close association between smoking status and malnutrition risk in COPD suggests that smoking is an important consideration in the nutritional management of malnourished COPD outpatients. Conclusions: Smoking status in COPD outpatients is a significant independent risk factor for malnutrition and a weaker (nonsignificant) predictor of 1-year mortality. Malnutrition significantly predicted 1-year mortality. References: Cochrane, W.J. & Afolabi, O.A. (2004) Investigation into the nutritional status, dietary intake and smoking habits of patients with chronic obstructive pulmonary disease. J. Hum. Nutr. Diet. 17, 3–11. Collins, P.F., Stratton, R.J., Kurukulaaratchy, R., Warwick, H., Cawood, A.L. & Elia, M. (2010) ‘MUST’ predicts 1-year survival in outpatients with chronic obstructive pulmonary disease. Clin. Nutr. 5, 17. Elia, M. (Ed.) (2003) The ‘MUST’ Report. BAPEN. http://www.bapen.org.uk (accessed on March 30 2011). Nobel, M., McLennan, D., Wilkinson, K., Whitworth, A. & Barnes, H. (2008) The English Indices of Deprivation 2007.
http://www.communities.gov.uk (accessed on March 30 2011).
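A minimal sketch of a binary logistic regression of malnutrition risk on smoking status with adjustment for age, of the kind described above, fitted to simulated data; the variable names, coefficients and data are hypothetical, not the COPD cohort results.

# Illustrative sketch only: adjusted odds ratio from binary logistic regression on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
smoker = rng.integers(0, 2, size=n)                       # current smoker indicator (hypothetical)
age = rng.normal(73, 10, size=n)
logit_p = -2.0 + 0.7 * smoker + 0.02 * (age - 73)         # assumed data-generating model
malnourished = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([smoker, age]))
fit = sm.Logit(malnourished, X).fit(disp=False)
odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"Adjusted OR for current smoking = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")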
Abstract:
Deprivation assessed using the Index of Multiple Deprivation (IMD) has been shown to be an independent risk factor for both malnutrition and mortality in outpatients with chronic obstructive pulmonary disease (COPD) (Collins et al., 2010a, b). IMD consists of a range of different deprivation domains, although it is unclear which ones are most closely linked to malnutrition. The aim of the current study was to investigate whether the relationship between malnutrition and deprivation was a general one, affecting all domains in a consistent manner, or specific, affecting only certain domains.
Abstract:
Background: The adverse consequences of lymphedema following breast cancer in relation to physical function and quality of life are clear; however, its potential relationship with survival has not been investigated. Our purpose was to determine the prevalence of lymphedema and associated upper-body symptoms at 6 years following breast cancer and to examine the prognostic significance of lymphedema with respect to overall 6-year survival (OS). Methods and Results: A population-based sample of Australian women (n=287) diagnosed with invasive, unilateral breast cancer was followed for a median of 6.6 years and prospectively assessed for lymphedema (using bioimpedance spectroscopy [BIS], sum of arm circumferences [SOAC], and self-reported arm swelling), a range of upper-body symptoms, and vital status. OS was measured from date of diagnosis to date of death or last follow-up. Kaplan-Meier methods were used to calculate OS, and Cox proportional hazards models quantified the risk associated with lymphedema. Approximately 45% of women reported at least one moderate to extreme symptom at 6.6 years postdiagnosis, while 34% had shown clinical evidence of lymphedema, and 48% reported arm swelling at least once since the baseline assessment. A total of 27 (9.4%) women died during the follow-up period, and lymphedema diagnosed by BIS or SOAC between 6 and 18 months postdiagnosis predicted mortality (BIS: HR=2.5; 95% CI: 0.9, 6.8; p=0.08; SOAC: HR=3.0; 95% CI: 1.1, 8.7; p=0.04). There was no association (HR=1.2; 95% CI: 0.5, 2.6; p=0.68) between self-reported arm swelling and OS. Conclusions: These findings suggest that lymphedema may influence survival following breast cancer treatment and warrant further investigation in other cancer cohorts, together with explication of a potential underlying biology.
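A minimal sketch of fitting a Cox proportional hazards model for overall survival with a binary lymphedema indicator, mirroring the analysis described above but using simulated data and the third-party lifelines package; all values are hypothetical.

# Illustrative sketch only: Cox proportional hazards fit on simulated survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
lymphedema = rng.integers(0, 2, size=n)                       # hypothetical exposure indicator
baseline_hazard = 0.02
time_to_event = rng.exponential(1 / (baseline_hazard * np.exp(0.9 * lymphedema)))
follow_up = np.minimum(time_to_event, 6.6)                    # administrative censoring at 6.6 years
event = (time_to_event <= 6.6).astype(int)

df = pd.DataFrame({"time": follow_up, "event": event, "lymphedema": lymphedema})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                                           # hazard ratio reported as exp(coef)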