Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Their treatment performance can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative merits of multi-chamber versus single-chamber septic tanks remain unresolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This is the main issue of concern, given the unreliability of effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions, and as such their applicability is location specific. The design of systems based solely on evapotranspiration is also questionable; to ensure more reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Once the clogging mat forms, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of its formation by various physical, chemical and biological processes. Biological clogging is the most common process: bacterial growth or its by-products reduce the soil pore diameters, generally under anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention, although research conclusions with regard to short rest periods are contradictory. It has been claimed that intermittent rest periods result in aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products of aerobic processes in fact lead to even more severe clogging. It has been further recommended that rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. Other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that eventuates after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. Physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface have been shown to be of only short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, as is the nature of the suspended solids. The finer particles from extended aeration systems, when compared to those from septic tanks, penetrate deeper into the soil and hence ultimately form a denser clogging mat. However, the importance of improved pretreatment to clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in the case of highly permeable soils but not with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by surface ponding of effluent or seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities to ensure the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed, and dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems, particularly where saturated conditions persist under the soil absorption bed or where effluent runs off the surface as a result of system failure. Soils also have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater, and the relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is controlled. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site determines the overall capability classification for that site, which in turn dictates the type of effluent disposal method to be adopted.
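In computational terms, the land capability approach described above reduces to a simple rule: rate each site factor on a limitation scale and let the worst rating govern the classification. A minimal sketch in Python; the factor names and the 1–4 rating scale are illustrative assumptions, not taken from the report.

```python
# Most-limiting-factor rule for land capability classification.
# Each site factor is rated from 1 (no limitation) to 4 (severe limitation);
# the overall site class is the worst (highest) single rating.
# Factor names and the 1-4 scale are illustrative assumptions.

def capability_class(ratings):
    """Return the overall capability class for a site."""
    return max(ratings.values())

site = {
    "soil_permeability": 2,
    "depth_to_water_table": 3,   # the governing (most limiting) factor here
    "slope": 1,
}

print(capability_class(site))  # -> 3: water table depth dictates the class
```

Under this rule a single severe rating dominates the classification regardless of how favourable the remaining factors are, mirroring the "most limiting factor" principle in the text.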
Abstract:
Scope: We examined whether dietary supplementation with fish oil modulates inflammation, fibrosis and oxidative stress following obstructive renal injury. Methods and results: Three groups of Sprague-Dawley rats (n = 16 per group) were fed for 4 wk on normal rat chow (oleic acid), chow containing fish oil (33 g eicosapentaenoic acid and 26 g docosahexaenoic acid per kg diet), or chow containing safflower oil (60 g linoleic acid per kg diet). All diets contained 7% fat. After 4 wk, the rats on each diet were subdivided into four smaller groups (n = 4 per group). Unilateral ureteral obstruction was induced in three of these groups (for 4, 7 and 14 days); the fourth group on each diet did not undergo surgery and was sacrificed at 14 days to serve as controls. At sacrifice, plasma and portions of the kidneys were removed and frozen; other portions of kidney tissue were fixed and prepared for histology. Compared with normal chow and safflower oil, fish oil attenuated collagen deposition, macrophage infiltration, TGF-beta expression, apoptosis, and tissue levels of arachidonic acid, MIP-1 alpha, IL-1 beta, MCP-1 and leukotriene B(4). Compared with normal chow, fish oil increased the expression of HO-1 protein in kidney tissue. Conclusions: Fish oil intake reduced inflammation, fibrosis and oxidative stress following obstructive renal injury.
Abstract:
Diabetes is one of the greatest public health challenges facing Australia. It is already Australia’s leading cause of kidney failure, blindness (in those under 60 years) and lower limb amputation, and causes significant cardiovascular disease. Australia’s diabetes amputation rate is one of the worst in the developed world, and appears to have increased significantly in the last decade, whereas some other diabetes complication rates appear to have decreased. This paper aims to compare the national burden of disease for the four major diabetes-related complications and the availability of government funding to combat these complications, in order to determine where diabetes foot disease ranks in Australia. Our review of relevant national literature indicates foot disease ranks second overall in burden of disease and last in evidence-based government funding to combat these diabetes complications. This suggests public funding to address foot disease in Australia is disproportionately low when compared to funding dedicated to other diabetes complications. There is ample evidence that appropriate government funding of evidence-based care improves all diabetes complication outcomes and reduces overall costs. Numerous diverse Australian peak bodies have now recommended similar diabetes foot evidence-based strategies that have reduced diabetes amputation rates and associated costs in other developed nations. It would seem intuitive that “it’s time” to fund these evidence-based strategies for diabetes foot disease in Australia as well.
Abstract:
Background: Lower extremity amputation results in significant global morbidity and mortality. Australia appears to have a paucity of studies investigating lower extremity amputation. The primary aim of this retrospective study was to investigate key conditions associated with lower extremity amputations in an Australian population. Secondary objectives were to determine the influence of age and sex on lower extremity amputations, and the reliability of hospital-coded amputations. Methods: Lower extremity amputation cases performed at the Princess Alexandra Hospital (Brisbane, Australia) between July 2006 and June 2007 were identified through the relevant hospital discharge dataset (n = 197). All eligible clinical records were interrogated for age, sex, key condition associated with amputation, amputation site, first-ever amputation status and the accuracy of the original hospital coding. Exclusion criteria were records unavailable for audit and cases where the key condition could not be determined. Chi-squared tests, t-tests, ANOVA and post hoc tests were used to determine differences between groups. Kappa statistics were used to measure reliability between coded and audited amputations. A minimum significance level of p < 0.05 was used throughout. Results: One hundred and eighty-six cases were eligible and audited. Overall, 69% were male, 56% were first amputations, 54% were major amputations, and mean age was 62 ± 16 years. Key conditions associated with amputation included type 2 diabetes (53%), peripheral arterial disease (non-diabetes) (18%), trauma (8%), type 1 diabetes (7%) and malignant tumours (5%). Mean age at amputation differed by key condition: trauma 36 ± 10 years, type 1 diabetes 52 ± 12 years and type 2 diabetes 67 ± 10 years (p < 0.01). Reliability of the original hospital coding was high, with Kappa values over 0.8 for all variables.
Conclusions: This study, the first in over 20 years to report on all levels of lower extremity amputations in Australia, found that people undergoing amputation are more likely to be older, male and have diabetes. It is recommended that large prospective studies be implemented and national lower extremity amputation rates be established to address the large preventable burden of lower extremity amputation in Australia.
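The Kappa values quoted above measure agreement between the hospital-coded and audited classifications beyond what chance alone would produce. A short sketch of Cohen's kappa for a two-category variable; the counts used are illustrative, not the study's data.

```python
# Cohen's kappa: agreement between two classifications, corrected for chance.
# table[i][j] = number of cases placed in category i by the hospital coding
# and category j by the audit. Counts below are illustrative only.

def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# e.g. major vs minor amputation over 186 audited cases (illustrative split)
table = [[90, 10],
         [6, 80]]
print(round(cohens_kappa(table), 2))  # -> 0.83, i.e. above the 0.8 threshold
```

A kappa of 1.0 would indicate perfect agreement; values above 0.8, as reported in the study, are conventionally read as almost perfect reliability.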
Abstract:
Background: As low HDL cholesterol levels are a risk factor for cardiovascular disease, raising HDL cholesterol substantially by inhibiting or modulating cholesteryl ester transfer protein (CETP) may be useful in coronary artery disease. The first CETP inhibitor to enter clinical trials, torcetrapib, was shown to increase levels of HDL cholesterol, but it also increased adverse cardiovascular outcomes, probably due to an increase in blood pressure and aldosterone secretion via an off-target mechanism(s). Objective/methods: Dalcetrapib is a new CETP modulator that increases levels of HDL cholesterol but does not increase blood pressure or aldosterone secretion. The objective was to evaluate a paper describing the effects of dalcetrapib on carotid and aortic wall thickness in subjects with, or at high risk of, coronary artery disease: the dal-PLAQUE study. Results: dal-PLAQUE showed that dalcetrapib reduced the progression of atherosclerosis, and may also reduce the associated vascular inflammation, in subjects with, or at high risk of, coronary heart disease who were already taking statins. Conclusions: These results suggest that modulating CETP with dalcetrapib may be beneficial in cardiovascular disease. The results of the dal-HEART series, which includes dal-PLAQUE 1 and 2 and dal-OUTCOMES, will, when complete, provide more definitive information about the benefit, or not, of dalcetrapib in coronary artery disease.
Abstract:
Background and aims: Lower-limb lymphoedema is a serious and feared sequela of treatment for gynaecological cancer. Given the limited prospective data on the incidence of, and risk factors for, lymphoedema after treatment for gynaecological cancer, we initiated a prospective cohort study in 2008. Methods: Data were available for 353 women with malignant disease. Participants were assessed before treatment and at regular intervals after treatment for two years. Follow-up visits were grouped into time-periods of six weeks to six months (time 1), nine months to 15 months (time 2), and 18 months to 24 months (time 3). Preliminary data analyses were undertaken up to time 2 using generalised estimating equations (with the best-fitting covariance structure) to model the repeated measures data of Functional Assessment of Cancer Therapy-General (FACT-G) quality of life (QoL) scores and self-reported swelling at each follow-up period. Results: Depending on the time-period, between 30% and 40% of patients self-reported swelling of the lower limb. The QoL of those with self-reported swelling was lower at all time-periods compared with those who did not have swelling. Mean (95% CI) FACT-G scores at times 0, 1 and 2 were 80.7 (78.2, 83.2), 83.0 (81.0, 85.0) and 86.3 (84.2, 88.4), respectively, for those with swelling, and 85.0 (83.0, 86.9), 86.0 (84.1, 88.0) and 88.9 (87.0, 90.7), respectively, for those without swelling. Conclusions: Lower-limb swelling adversely influences QoL and change in QoL over time in patients with gynaecological cancer.
Abstract:
Objectives: To evaluate the clinical value of pre-operative serum CA-125 in predicting the presence of extra-uterine disease in patients with apparent early stage endometrial cancer. Methods: Between October 6, 2005 and June 17, 2010, 760 patients were enrolled in an international, multicentre, prospective randomized trial (LACE) comparing laparotomy with laparoscopy in the management of endometrial cancer apparently confined to the uterus. This study is based on data from 657 patients with endometrial adenocarcinoma who had a pre-operative serum CA-125 value, and was undertaken to correlate pre-operative serum CA-125 with final stage. Results: A pre-operative CA-125 cutpoint of 30 U/ml was associated with the smallest misclassification error (14.5%) using a multiple cross-validation method. Median pre-operative serum CA-125 was 14 U/ml and, using the 30 U/ml cutpoint, 14.9% of patients had elevated CA-125 levels. Of 98 patients with an elevated CA-125 level, 36 (36.7%) had evidence of extra-uterine disease. Of the 116 patients (17.7%) with evidence of extra-uterine disease, 31.0% had an elevated CA-125 level. In univariate and multivariate logistic regression analyses, only pre-operative CA-125 level was found to be associated with extra-uterine spread of disease. The 30 U/ml cutpoint achieved a sensitivity, specificity, positive predictive value and negative predictive value of 31.0%, 88.5%, 36.7% and 85.7% respectively. Overall, 326/657 (49.6%) patients had full surgical staging involving lymph node dissection. When the analysis was limited to patients who had undergone full surgical staging, the outcomes remained essentially unchanged. Conclusions: Elevated CA-125 above 30 U/ml in patients with apparent early stage disease is associated with a sensitivity of 31.0% and specificity of 88.5% in detecting extra-uterine disease. Pre-operative identification of this risk factor may assist in triaging patients to tertiary centres and comprehensive surgical staging.
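The diagnostic metrics quoted above can be reproduced from the counts the abstract itself provides (657 patients, 116 with extra-uterine disease, 98 with elevated CA-125, 36 of whom had disease); the sketch below is only an arithmetic check, not part of the study's analysis.

```python
# Reconstructing the 2x2 diagnostic table from the abstract's counts.
total = 657      # patients with a pre-operative CA-125 value
diseased = 116   # extra-uterine disease at final staging
elevated = 98    # CA-125 above the 30 U/ml cutpoint
tp = 36          # elevated CA-125 AND extra-uterine disease

fp = elevated - tp            # 62: elevated but no disease
fn = diseased - tp            # 80: disease missed by the cutpoint
tn = total - diseased - fp    # 479: neither elevated nor diseased

sensitivity = tp / diseased            # 36/116
specificity = tn / (total - diseased)  # 479/541
ppv = tp / elevated                    # 36/98
npv = tn / (total - elevated)          # 479/559

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {100 * value:.1f}%")
```

All four figures come out to the reported 31.0%, 88.5%, 36.7% and 85.7%, confirming the abstract's counts are internally consistent.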
Abstract:
Kallikrein 14 (KLK14) has been proposed as a useful prognostic marker in prostate cancer, with expression reported to be associated with tumour characteristics such as higher stage and Gleason score. KLK14 tumour expression has also shown the potential to predict prostate cancer patients at risk of disease recurrence after radical prostatectomy. The KLKs are a remarkably hormone-responsive family of genes, although detailed studies of androgen regulation of KLK14 in prostate cancer have not been undertaken to date. Using in vitro studies, we have demonstrated that unlike many other prostatic KLK genes that are strictly androgen responsive, KLK14 is more broadly expressed and inversely androgen regulated in prostate cancer cells. Given these results and evidence that KLK14 may play a role in prostate cancer prognosis, we also investigated whether common genetic variants in the KLK14 locus are associated with risk and/or aggressiveness of prostate cancer in approximately 1200 prostate cancer cases and 1300 male controls. Of 41 single nucleotide polymorphisms assessed, three were associated with higher Gleason score (≥7): rs17728459 and rs4802765, both located upstream of KLK14, and rs35287116, which encodes a p.Gln33Arg substitution in the KLK14 signal peptide region. Our findings provide further support for KLK14 as a marker of prognosis in prostate cancer.
Abstract:
The family Geminiviridae comprises a group of plant-infecting circular ssDNA viruses that severely constrain agricultural production throughout the temperate regions of the world, and are a particularly serious threat to food security in sub-Saharan Africa. While geminiviruses exhibit considerable diversity in terms of their nucleotide sequences, genome structures, host ranges and insect vectors, the best characterised and economically most important of these viruses are those in the genus Begomovirus. Whereas begomoviruses are generally considered to be either monopartite (one ssDNA component) or bipartite (two circular ssDNA components called DNA-A and DNA-B), many apparently monopartite begomoviruses are associated with additional subviral ssDNA satellite components, called alpha- (DNA-αs) or betasatellites (DNA-βs). Additionally, subgenomic molecules, also known as defective interfering (DI) DNAs, which are usually derived from the parent helper virus through deletions of parts of its genome, are associated with both bipartite and monopartite begomoviruses. The past three decades have witnessed the emergence and diversification of various new begomoviral species and associated DI DNAs in southern Africa, East Africa, and proximal Indian Ocean islands, which today threaten important vegetable and commercial crops such as tobacco, cassava, tomato, sweet potato, and beans. This review aims to describe what is known about these viruses and their impacts on sustainable production in this sensitive region of the world. © 2012 by the authors; licensee MDPI, Basel, Switzerland.
Abstract:
Human papillomaviruses (HPVs) are obligate epithelial pathogens and typically cause localized mucosal infections. We therefore hypothesized that T-cell responses to HPV antigens would be greater at sites of pathology than in the blood. Focusing on HPV-16 because of its association with cervical cancer, the magnitude of HPV-specific T-cell responses at the cervix was compared with those in the peripheral blood by intracellular cytokine staining following direct ex vivo stimulation with both virus-like particles assembled from the major capsid protein L1, and the major HPV oncoprotein, E7. We show that both CD4+ and CD8+ T cells from the cervix responded to the HPV-16 antigens and that interferon-γ (IFN-γ) production was HPV type-specific. Comparing HPV-specific T-cell IFN-γ responses at the cervix with those in the blood, we found that while CD4+ and CD8+ T-cell responses to L1 were significantly correlated between compartments (P = 0.02 and P = 0.05, respectively), IFN-γ responses in both T-cell subsets were significantly greater in magnitude at the cervix than in peripheral blood (P = 0.02 and P = 0.003, respectively). In contrast, both CD4+ and CD8+ T-cell IFN-γ responses to E7 were of similar magnitude in both compartments, and CD8+ responses were significantly correlated between these distinct immunological compartments (P = 0.04). We therefore show that inflammatory T-cell responses against L1 (but not E7) demonstrate a clear compartmental bias, with magnitudes that reflect local viral replication, while the correlation of HPV-specific responses between compartments indicates their linkage.
Abstract:
As cervical cancer is causally associated with 14 high-risk types of human papillomavirus (HPV), a successful HPV vaccine will have a major impact on this disease. Although some persistent HPV infections progress to cervical cancer, host immunity is generally able to clear most HPV infections. Both cell-mediated and antibody responses have been implicated in influencing the susceptibility, persistence or clearance of genital HPV infection. Two clinical trials have shown that vaccines based on virus-like particles (VLPs) made from the major capsid protein, L1, are able to protect, in a type-specific manner, against cervical intra-epithelial neoplasia and infection. However, there is no evidence that even a mixed VLP vaccine will protect against types not included in the vaccine, and a major challenge that remains is how to engineer protection across a broader spectrum of viruses. Strategies for production of HPV vaccines using different vaccine vectors and different production systems are also reviewed. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
Maize streak disease is a severe agricultural problem in Africa, and the development of maize genotypes resistant to the causal agent, Maize streak virus (MSV), is a priority. A transgenic approach to engineering MSV-resistant maize was developed and tested in this study. A pathogen-derived resistance strategy was adopted, using targeted deletions and nucleotide-substitution mutants of the multifunctional MSV replication-associated protein gene (rep). Various rep gene constructs were tested for their efficacy in limiting replication of wild-type MSV by co-bombardment of maize suspension cells with an infectious genomic clone of MSV and assaying replicative forms of DNA by quantitative PCR. Digitaria sanguinalis, an MSV-sensitive grass species used as a model monocot, was then transformed with constructs that had inhibited virus replication in the transient-expression system. Challenge experiments using leafhopper-transmitted MSV indicated significant MSV resistance - from highly resistant to immune - in regenerated transgenic D. sanguinalis lines. Whereas regenerated lines containing a mutated full-length rep gene displayed developmental and growth defects, those containing a truncated rep gene were both fertile and free of growth defects, making the truncated gene a suitable candidate for the development of transgenic MSV-resistant maize. © 2007 SGM.
Abstract:
Background: Panicum streak virus (PanSV; Family Geminiviridae; Genus Mastrevirus) is a close relative of Maize streak virus (MSV), the most serious viral threat to maize production in Africa. PanSV and MSV have the same leafhopper vector species, largely overlapping natural host ranges and similar geographical distributions across Africa and its associated Indian Ocean Islands. Unlike MSV, however, PanSV has no known economic relevance. Results: Here we report on 16 new PanSV full genome sequences sampled throughout Africa and use these together with others in public databases to reveal that PanSV and MSV populations in general share very similar patterns of genetic exchange and geographically structured diversity. A potentially important difference between the species, however, is that the movement of MSV strains throughout Africa is apparently less constrained than that of PanSV strains. Interestingly, the MSV-A strain, which causes maize streak disease, is apparently the most mobile of all the PanSV and MSV strains investigated. Conclusion: We therefore hypothesize that the generally increased mobility of MSV relative to other closely related species such as PanSV may have been an important evolutionary step in the eventual emergence of MSV-A as a serious agricultural pathogen. The GenBank accession numbers for the sequences reported in this paper are GQ415386-GQ415401. © 2009 Varsani et al; licensee BioMed Central Ltd.
Abstract:
Background: Cervical cancer and infection with human immunodeficiency virus (HIV) are both important public health problems in South Africa (SA). The aim of this study was to determine the prevalence of cervical squamous intraepithelial lesions (SILs), high-risk human papillomavirus (HR-HPV), HPV viral load and HPV genotypes in HIV-positive women initiating anti-retroviral (ARV) therapy. Methods: A cross-sectional survey was conducted at an anti-retroviral (ARV) treatment clinic in Cape Town, SA in 2007. Cervical specimens were taken for cytological analysis and HPV testing. The Digene Hybrid Capture 2 (HC2) test was used to detect HR-HPV. Relative light units (RLU) were used as a measure of HPV viral load. HPV types were determined using the Roche Linear Array HPV Genotyping test. Crude associations with abnormal cytology were tested and multiple logistic regression was used to determine independent risk factors for abnormal cytology. Results: The median age of the 109 participants was 31 years, the median CD4 count was 125 cells/mm3, 66.3% had an abnormal Pap smear, the HR-HPV prevalence was 78.9% (Digene), the median HPV viral load was 181.1 RLU (HC2-positive samples only) and 78.4% had multiple genotypes. Among women with abnormal smears the most prevalent HR-HPV types were HPV types 16, 58 and 51, all with a prevalence of 28.5%. On univariate analysis HR-HPV, multiple HPV types and HPV viral load were significantly associated with the presence of low- and high-grade SILs (LSIL/HSIL). The multivariate logistic regression showed that HPV viral load was associated with increased odds of LSIL/HSIL, with an odds ratio of 10.7 (95% CI 2.0 – 57.7) for those that were HC2 positive with a viral load of ≤ 181.1 RLU (the median HPV viral load), and 33.8 (95% CI 6.4 – 178.9) for those that were HC2 positive with a HPV viral load > 181.1 RLU. Conclusion: Women initiating ARVs have a high prevalence of abnormal Pap smears and HR-HPV.
Our results underscore the need for locally relevant, rigorous screening protocols for the increasing numbers of women accessing ARV therapy, so that the benefits of ARVs are not partially offset by an excess risk of cervical cancer.
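The odds ratios reported above come from a logistic regression, but the underlying quantity can be illustrated with the familiar 2×2-table formula OR = (a/b)/(c/d). The sketch below uses hypothetical counts for illustration only; it does not reproduce the study's data or its multivariate adjustment.

```python
def odds_ratio(exposed_cases: int, exposed_controls: int,
               unexposed_cases: int, unexposed_controls: int) -> float:
    """Crude (unadjusted) odds ratio from a 2x2 table:
    (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical counts: women with high viral load vs. low, LSIL/HSIL vs. normal.
# These numbers are made up for illustration; they are not the study's data.
print(round(odds_ratio(40, 10, 20, 30), 2))  # -> 6.0
```

A multivariate odds ratio, as reported in the abstract, would additionally adjust for covariates (e.g. age, CD4 count) via the exponentiated regression coefficient rather than this crude table calculation.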
Abstract:
Psittacine beak and feather disease (PBFD) has a broad host range and is widespread in wild and captive psittacine populations in Asia, Africa, the Americas, Europe and Australasia. Beak and feather disease circovirus (BFDV) is the causative agent. BFDV has an ~2 kb single-stranded circular DNA genome encoding just two proteins (Rep and CP). In this study we provide support for demarcation of BFDV strains by phylogenetic analysis of 65 complete genomes from databases and 22 new BFDV sequences isolated from infected psittacines in South Africa. We propose 94% genome-wide sequence identity as a strain demarcation threshold, with isolates sharing > 94% identity belonging to the same strain, and strain subtypes sharing > 98% identity. Currently, BFDV diversity falls within 14 strains, with five highly divergent isolates from budgerigars probably representing a new species of circovirus with three strains (budgerigar circovirus; BCV-A, -B and -C). The geographical distribution of BFDV and BCV strains is strongly linked to the international trade in exotic birds; strains with more than one host are generally located in the same geographical area. Lastly, we examined BFDV and BCV sequences for evidence of recombination, and determined that recombination had occurred in most BFDV and BCV strains. We established that there were two globally significant recombination hotspots in the viral genome: the first is along the entire intergenic region and the second is in the C-terminal portion of the CP ORF. The implications of our results for the taxonomy and classification of circoviruses are discussed. © 2011 SGM.
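The demarcation rule proposed above (same strain at > 94% genome-wide identity, same strain subtype at > 98%) is a simple threshold classification over pairwise identities. A minimal sketch, assuming identity is supplied as a fraction already computed from a genome-wide alignment (the alignment step itself is not shown):

```python
def classify_pair(identity: float) -> str:
    """Apply the proposed demarcation thresholds to one pairwise
    genome-wide identity value, given as a fraction in [0, 1]."""
    if identity > 0.98:
        return "same strain subtype"
    if identity > 0.94:
        return "same strain"
    return "different strains"

# Hypothetical identity values for illustration only
print(classify_pair(0.99))  # -> same strain subtype
print(classify_pair(0.95))  # -> same strain
print(classify_pair(0.90))  # -> different strains
```

Grouping a set of isolates into strains would additionally require linking all pairs that fall above the 94% threshold (e.g. single-linkage clustering), which this per-pair sketch does not attempt.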