912 results for Producing
Abstract:
Rapid screening tests and an appreciation of the simple genetic control of Alternaria brown spot (ABS) susceptibility have existed for many years, yet the application of this knowledge to commercial-scale breeding programs has been limited. Detached leaf assays were first demonstrated more than 40 years ago, and reliable data suggesting a single gene determining susceptibility have been emerging for at least 20 years. However, it is only recently that the requirement for genetic resistance in new hybrids has become a priority, following increased disease prevalence in Australian mandarin production areas previously considered too dry for the pathogen. Almost all of the high-fruit-quality parents developed so far by the Queensland-based breeding program are susceptible to ABS, necessitating the screening of their progeny to avoid commercialisation of susceptible hybrids. This is done effectively and efficiently by spraying 3-6-month-old hybrid seedlings with a spore suspension derived from a toxin-producing field isolate of Alternaria alternata, then incubating these seedlings in a cool room at 25°C and high humidity for 5 days. Susceptible seedlings show clear disease symptoms and are discarded. Analysis of observed and expected segregation ratios loosely supports the hypothesis of a single dominant gene for susceptibility, but does not rule out alternative genetic models. After implementing routine screening for ABS resistance for three seasons, we now have more than 20,000 hybrids growing in field progeny blocks that have been screened for resistance to the ABS disease.
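A single-dominant-gene hypothesis is conventionally checked by comparing observed segregation counts against the expected ratio with a chi-square goodness-of-fit test. A minimal sketch, using hypothetical counts rather than the breeding program's actual data:

```python
def chi_square_gof(observed, ratio):
    """Chi-square goodness-of-fit statistic for observed class counts
    against an expected segregation ratio such as [3, 1]."""
    total = sum(observed)
    expected = [total * r / sum(ratio) for r in ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 160 susceptible vs 40 resistant seedlings,
# tested against the 3:1 ratio expected from two heterozygous parents
# (a heterozygous x resistant cross would instead predict 1:1).
chi2 = chi_square_gof([160, 40], [3, 1])
# With 1 degree of freedom, the 5% critical value is 3.841; a statistic
# below that does not reject the single-dominant-gene hypothesis.
print(round(chi2, 3))
```

Applying such a test family by family is one way observed ratios can "loosely support" a genetic model while still leaving alternative models open.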
Abstract:
The root-lesion nematodes (RLN) Pratylenchus thornei and P. neglectus are widely distributed in Australian grain-producing regions and can reduce the yield of intolerant wheat cultivars by up to 65%, costing the industry approximately AUD 123 M per year. Consequently, researchers in the northern, southern and western regions have independently developed procedures to evaluate the resistance of cereal cultivars to RLN. To compare results, each of the three laboratories phenotyped sets of 26 and 36 cereal cultivars for relative resistance/susceptibility to P. thornei and P. neglectus, respectively. The northern and southern regions also investigated the effects of planting time and experiment duration on RLN reproduction and cultivar ranking. Results show the genetic correlation between cultivars tested using the northern and southern procedures evaluating P. thornei resistance was 0.93. Genetic correlations between experiments using the same procedure, but with different planting times, were 0.99 for both the northern and southern procedures. The genetic correlation between cultivars tested using the northern, southern and western procedures evaluating P. neglectus resistance ranged from 0.71 to 0.95. Genetic correlations between experiments using the same procedure but with different planting times ranged from 0.91 to 0.99. This study established that, even though experiments were conducted in different geographic locations and with different trial management practices, the diverse nematode resistance screening procedures ranked cultivars similarly. Consequently, RLN resistance data can be pooled across regions to provide national consensus ratings of cultivars.
Abstract:
Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems could be linked to the functional form of the model representing the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to carry out an empirical evaluation of some popular models and alternatives. Design and methods: This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (linear, relative-yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated for a wide range of weed densities, maximum potential yield losses and maximum yield losses per weed. Results: Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models were modified to account for the effects of important factors (weather, tillage and crop growth stage at weed emergence) influencing weed–crop interaction and to improve prediction accuracy. This limited their applicability in DSTs, as they became less generalised and were often applicable to a much narrower range of conditions than would be encountered in the use of DSTs; these factors' effects could be better accounted for by other techniques. Among the models empirically assessed, the linear model is very simple and appears to work well at sparse weed densities, but it produces unrealistic behaviour at high densities. The relative-yield model exhibits the expected behaviour at high densities and high levels of maximum yield loss per weed, but probably underestimates yield loss at low to intermediate densities. 
The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed combined with high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds required to produce a given proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions. Conclusions: Previously tested functional forms exhibit problems for use in DSTs for crop yield loss modelling. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
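Two of the evaluated functional forms, the linear model and the rectangular-hyperbola (Cousens-type) model, are simple enough to state directly. A minimal sketch illustrating the behaviours described above; parameter values are illustrative only, and the double-scaled and density-scaled models are not reproduced here:

```python
def linear_loss(d, i, a=100.0):
    """Linear model: yield loss grows at i percent per weed per unit
    area as density d increases, capped at the maximum loss a (%)."""
    return min(i * d, a)

def hyperbolic_loss(d, i, a):
    """Rectangular-hyperbola model: loss approaches the asymptote a
    as weed density d increases; i is the initial slope (loss per
    weed at low density)."""
    return i * d / (1.0 + i * d / a)

# Illustrative parameters only: i = 2% loss per weed, a = 60% max loss.
for d in (0, 5, 50, 500):
    print(d, round(linear_loss(d, 2.0, 60.0), 1),
          round(hyperbolic_loss(d, 2.0, 60.0), 1))
```

At d = 500 the hyperbolic model approaches but never exceeds its asymptote, whereas the uncapped linear term would predict an impossible 1000% loss, the "unrealistic behaviour at high densities" noted above.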
Abstract:
Trioxalatocobaltates of bivalent metals, KM2+[Co(C2O4)3]·xH2O with M2+ = Ba, Sr, Ca and Pb, have been prepared and characterized, and their thermal behaviour studied. The compounds decompose to yield potassium carbonate, the bivalent metal carbonate or oxide, and cobalt oxide as final products. The formation of the final decomposition products is influenced by the surrounding atmosphere. Bivalent metal cobaltites of the types KM2+CoO3 and M2+CoO3−x are not identified among the final products of decomposition. The study brings out the importance of the decomposition mode of the precursor in producing the desired end products.
Abstract:
Minimizing fungal infection is essential to the control of mycotoxin contamination of foods and feeds, but many potential control methods carry their own safety concerns for consumers. Photodynamic inactivation is a novel light-based approach which offers a promising alternative to conventional methods for the control of mycotoxigenic fungi. This study describes the use of curcumin, a natural polyphenolic compound from the spice turmeric (Curcuma longa), to inactivate spores of Aspergillus flavus, one of the major aflatoxin-producing fungi in foods and feeds. In this study curcumin was shown to be an effective photosensitiser when combined with visible light (420 nm). Experiments were conducted in vitro and in vivo, with A. flavus spores treated with different photosensitiser concentrations and light doses both in buffer solution and on maize kernels. Fungal loads from treated and untreated samples were compared, and reductions in fungal spore counts of up to 3 log CFU ml−1 in suspension and 2 log CFU g−1 on maize kernels were obtained using optimal combinations of dye concentration and light dose. These results indicate that curcumin-mediated photosensitization is a potentially effective method to decontaminate A. flavus spores in foods and feeds.
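Reductions reported in "log CFU" units are base-10 logarithms of the ratio of untreated to treated viable counts; a quick sketch with hypothetical counts, not the study's measurements:

```python
from math import log10

def log_reduction(untreated_cfu, treated_cfu):
    """Log10 reduction in viable counts (CFU) after a treatment."""
    return log10(untreated_cfu / treated_cfu)

# Hypothetical counts: a drop from 1e6 to 1e3 CFU/ml corresponds to a
# 3-log (99.9%) reduction, the scale of the best suspension result.
print(log_reduction(1e6, 1e3))
```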
Abstract:
Wheat is at peak quality soon after harvest. Subsequently, diverse biota use wheat as a resource in storage, including insects and mycotoxin-producing fungi. Transportation networks for stored grain are crucial to food security and provide a model system for an analysis of the population structure, evolution, and dispersal of biota in networks. We evaluated the structure of rail networks for grain transport in the United States and Eastern Australia to identify the shortest paths for the anthropogenic dispersal of pests and mycotoxins, as well as the major sources, sinks, and bridges for movement. We found important differences in the risk profile in these two countries and identified priority control points for sampling, detection, and management. An understanding of these key locations and roles within the network is a new type of basic research result in postharvest science and will provide insights for the integrated pest management of high-risk subpopulations, such as pesticide-resistant insect pests.
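Finding the shortest anthropogenic dispersal routes in such a network is a standard graph problem; a minimal breadth-first-search sketch on a toy unweighted graph (the node names are invented, not taken from the actual rail data):

```python
from collections import deque

def shortest_path(graph, source, target):
    """Breadth-first search over an unweighted adjacency-list graph;
    returns one shortest path from source to target, or None."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy grain network: farms feed elevators, which feed a hub that
# bridges to two export ports.
rail = {
    "farm_A": ["elevator_1"],
    "farm_B": ["elevator_1", "elevator_2"],
    "elevator_1": ["hub"],
    "elevator_2": ["hub"],
    "hub": ["port_X", "port_Y"],
}
print(shortest_path(rail, "farm_A", "port_X"))
```

On the real weighted networks, Dijkstra's algorithm and betweenness-style centrality measures would be the natural tools for identifying the bridges and priority control points described above.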
Abstract:
With livestock manures increasingly sought as alternatives to costly synthetic fertilisers, it is imperative that we understand and manage their associated greenhouse gas (GHG) emissions. Here we provide the first dedicated assessment of how the GHG-emitting potential of various manures responds to the different stages of the manure management continuum (e.g., fresh from the feedpen surface vs stockpiled). The research is important from the perspective of manure application to agricultural soils. Manures studied included: manure from beef feedpen surfaces and stockpiles; poultry broiler litter (8-week batch); fresh and composted egg layer litter; and fresh and composted piggery litter. The gases assessed were methane (CH4) and nitrous oxide (N2O), the two principal agricultural GHGs. We employed proven protocols to determine the manures' ultimate CH4-producing potential. We also devised a novel incubation experiment to elucidate their N2O-emitting potential, a measure for which no established methods exist. We found lower CH4 potentials in manures from later stages of their management sequence compared with earlier stages, but only by a factor of about 0.65. Moreover, for the beef manures this decrease was not significant (P > 0.05). Nitrous oxide emission potential was significantly positively correlated (P < 0.05) with C/N ratios, yet showed no obvious relationship with manure management stage. Indeed, N2O emissions from the composted egg manure were considerably (13×) and significantly (P < 0.05) higher than those from the fresh egg manure. Our study demonstrates that manures from all stages of the manure management continuum potentially entail significant GHG risk when applied to arable landscapes. Efforts to harness manure resources need to account for this.
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while retaining a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence we also seek to apply a limited amount of reordering, so that the overall representation error is minimized. 
We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve, and even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms in sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
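The optimal dynamic programming algorithm that the approximation and aggregation results are measured against can be sketched compactly; a minimal O(n²k) version for one-dimensional data under squared error:

```python
def segment(seq, k):
    """Optimal partition of seq into k contiguous segments minimising
    total squared error to each segment's mean (classic DP baseline).
    Returns (total error, sorted segment end boundaries)."""
    n = len(seq)
    # Prefix sums give any segment's squared error in O(1).
    ps = [0.0] * (n + 1)   # prefix sums of values
    ps2 = [0.0] * (n + 1)  # prefix sums of squared values
    for i, x in enumerate(seq):
        ps[i + 1] = ps[i] + x
        ps2[i + 1] = ps2[i] + x * x

    def err(i, j):  # squared error of seq[i:j] around its mean
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    cost = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for seg in range(1, min(j, k) + 1):
            for i in range(seg - 1, j):
                c = cost[i][seg - 1] + err(i, j)
                if c < cost[j][seg]:
                    cost[j][seg], back[j][seg] = c, i
    # Recover segment boundaries by walking the back-pointers.
    bounds, j = [], n
    for seg in range(k, 0, -1):
        bounds.append(j)
        j = back[j][seg]
    return cost[n][k], sorted(bounds)

# Two flat levels with one change point: the DP finds the boundary.
print(segment([1, 1, 1, 9, 9, 9], 2))
```

The approximation algorithm described above would instead run this DP on each subsequence of a coarse partition and combine the per-subsequence solutions.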
Abstract:
Historically, two-dimensional (2D) cell culture has been the preferred method of producing disease models in vitro. Recently, there has been a move away from 2D culture in favor of generating three-dimensional (3D) multicellular structures, which are thought to be more representative of the in vivo environment. This transition has brought with it an influx of technologies capable of producing these structures in various ways. However, it is becoming evident that many of these technologies do not perform well in automated in vitro drug discovery units. We believe this is a result of their incompatibility with high-throughput screening (HTS). In this study, we review a number of technologies currently available for producing in vitro 3D disease models, assess their amenability to high-content screening and HTS, and highlight our own work in attempting to address many of the practical problems that are hampering the successful deployment of 3D cell systems in mainstream research.
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them, and thus contain information about the fluxes that cannot be recovered from the steady-state balance constraints alone. The field of research that estimates the fluxes using measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. 
In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Mathematically involved non-linear optimization methods, which can get stuck in local optima, are thus avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
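The steady-state balance constraints, and why they leave alternative-pathway fluxes undetermined, can be illustrated with a toy stoichiometric matrix (the network and flux numbers are invented for illustration):

```python
def steady_state_residuals(S, v):
    """Residual S·v for each internal metabolite; all residuals are
    zero when the flux vector v satisfies the balance constraints."""
    return [sum(s_ij * v_j for s_ij, v_j in zip(row, v)) for row in S]

# Toy network: a nutrient uptake flux produces A, two alternative
# pathways convert A to B, and an output flux consumes B.
# Rows = internal metabolites (A, B); columns = fluxes
# (v_in, v_path1, v_path2, v_out).
S = [
    [1, -1, -1, 0],   # A: made by v_in, used by both pathways
    [0,  1,  1, -1],  # B: made by both pathways, used by v_out
]

# Both flux vectors balance perfectly, so the split between the two
# alternative pathways cannot be recovered from the steady state
# alone -- exactly the gap the 13C labelling data is used to fill.
print(steady_state_residuals(S, [10, 7, 3, 10]))
print(steady_state_residuals(S, [10, 2, 8, 10]))
```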
Abstract:
Hydrazinium magnesium sulfate, (N2H5)2Mg(SO4)2, has been prepared by dissolving magnesium powder in a solution of ammonium sulfate in hydrazine hydrate, by the reaction of ammonium magnesium sulfate with hydrazine hydrate, and by the cocrystallisation of dihydrazinium sulfate and magnesium sulfate. The product has been characterized by chemical analysis and infrared spectra. Thermal analysis of (N2H5)2Mg(SO4)2 by TG and DTA shows an exothermic decomposition at 302°C giving Mg(N2H4)SO4 as an intermediate and an endothermic decomposition at 504°C producing MgSO4.
Abstract:
Sirex woodwasp was detected in Queensland in 2009 and rapidly established in softwood plantations (Pinus radiata and P. taeda) in southern border regions. Biocontrol inoculations of Deladenus siricidicola began soon after, and adults were monitored to assess the success of the programme. Wasp size, sex ratios, emergence phenology and nematode parasitism rates were recorded, along with the assessment of wild-caught females. Patterns varied within and among seasons, but overall, P. taeda appeared to be a less suitable host than P. radiata, producing smaller adults, lower fat body content and fewer females. Sirex emerging from P. taeda also showed lower levels of nematode parasitism, possibly due to interactions with the more abundant blue-stain fungus in this host. Sirex adults generally emerged between November and March, with distinct peaks in January and March, separated by a marked drop in emergence in early February. Temperature provided the best correlate of seasonal emergence, with fortnights with higher mean minimum temperatures having higher numbers of Sirex emerging. This has implications for the anticipated northward spread of Sirex into sub-tropical coastal plantation regions. Following four seasons of inundative release of nematodes in Queensland, parasitism rates remain low and have resulted in only partial sterilization of infected females.
Abstract:
The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated "process issues and systemic failures" in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, offered many recommendations that together would be expected to remedy the systemic issues reported. To follow up on the Phase 1 report, three additional projects were commissioned as part of Phase 2 of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool, a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase 2 projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings, but in principle the concept could apply to any building type. Nine councils were recruited to help develop and test a pilot electronic building passport tool. The participation of these councils, across all states, enabled an assessment of the extent to which councils are currently utilising documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance. 
The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. Some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine council participants reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall we formed the view that documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code's energy performance requirements. In other words, the Code, and its implementation under state and territory regulatory processes, is falling short as a 'quality assurance' system for consumers. As a result it is likely that the new housing stock is under-performing relative to policy expectations, consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that demand for documentation relating to building energy performance was low. All the participant councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy efficient homes are to be harnessed. These findings are fully consistent with the Phase 1 NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participant councils indicated that they might adopt such a system on a voluntary basis. 
The majority felt that such a system would only be taken up if it were:
- a nationally agreed system, imposed as a mandatory requirement under state or national regulation;
- capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and
- fully integrated into their existing document management systems, or at least seamlessly compatible rather than a separate, unlinked tool.
Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them, and are therefore unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.
Abstract:
This project has for the first time demonstrated the feasibility of hatchery production of jungle perch fingerlings. The research on jungle perch production has enabled a hatchery production manual with accompanying videos to be produced. This has given private commercial hatcheries the information needed to produce jungle perch fingerlings. Several hatcheries have already indicated an interest in producing jungle perch and will be assisted to do so in 2016. Currently jungle perch are not a permitted stocking species, so cannot be sold to fish stocking groups. However, hatcheries will be able to sell fingerlings to the aquarium trade or supply grow out facilities that could produce jungle perch for human consumption. Should jungle perch become a permitted species for stocking, this will provide hatcheries with a major new product option to sell to fish stocking groups. It would also benefit anglers by providing another iconic species for impoundment stocking programs. This could have flow-on benefits to regional economies through angler tourism. Should the pilot reintroductions of jungle perch into streams result in self-sustaining jungle perch populations, then there will be three restored jungle perch populations close to major population centres. This will create a new opportunity for anglers not normally able to target jungle perch. Since the majority of anglers who target jungle perch are catch and release fishers, angling is expected to have minimal impact on recovery of the populations. This project led to the development of a hatchery manual for jungle perch production and to a summary brochure. In late 2014 and in 2015 researchers were able to make the first ever releases of jungle perch fingerlings back into rivers and streams within their historical range.
Abstract:
This manual consists of written descriptions of jungle perch Kuhlia rupestris production and video material demonstrating each of the key production steps. Video links appear at the end of each major written section in the document; to activate a link, use Ctrl+click. The videos enhance the instructive ability of this manual. The keys to producing jungle perch are:
- maintaining broodstock in freshwater or low-salinity water (less than 5 ppt);
- spawning fish in full seawater at 28°C;
- incubating eggs in full seawater; salinities must not be less than 32 ppt;
- ensuring that first-feed jungle perch larvae have an adequate supply of copepod nauplii;
- rearing larvae in full seawater under bright light;
- using gentle aeration in tanks;
- postponing spawns until adequate densities of copepod nauplii are present in ponds;
- sustaining copepod blooms in ponds for at least 20 days;
- avoiding the use of paddlewheels in ponds;
- supplementary feeding with Artemia salina and weaning diets from 20 days after hatch;
- harvesting fingerlings or fry once they are 25-30 mm in length (50 to 60 days post hatch);
- covering tanks of fingerlings with 5 mm mesh and submerging freshwater inlets to prevent jumping.