925 results for Large-group methods
Abstract:
The quantification and characterisation of soil phosphorus (P) is of agricultural and environmental importance, and different extraction methods are widely used to assess the bioavailability of P and to characterise soil P reserves. However, the large variety of extractants, pre-treatments and sample preparation procedures complicates the comparison of published results. In order to improve our understanding of the behaviour and cycling of P in soil, it is crucial to know the scientific relevance of the methods used for various purposes. Knowledge of the factors affecting the analytical outcome is a prerequisite for justified interpretation of the results. The aim of this thesis was to study the effects of sample preparation procedures on soil P and to determine how the recovered P pool depends on the chemical nature of the extractant. Sampling is a critical step in soil testing, and the sampling strategy depends on the land-use history and the purpose of sampling. This study revealed that pre-treatments changed soil properties: air-drying in particular affected soil P, especially extractable organic P, by disrupting organic matter. This was evidenced by an increase in the water-extractable small-sized (<0.2 µm) P that, at least partly, took place at the expense of the large-sized (>0.2 µm) P. Freezing, by contrast, induced only insignificant changes and can therefore be considered a suitable storage method for soils from the boreal zone that naturally undergo periodic freezing. The results demonstrated that the chemical nature of the extractant affects its sensitivity to changes in soil P solubility. Buffered extractants obscured the alterations in P solubility induced by pH changes, whereas water extraction, though sensitive to physicochemical changes, can be used to reveal short-term changes in soil P solubility. As for organic P, the analysis was found to be sensitive to the sample preparation procedures: filtering may leave a large proportion of extractable organic P undetected, whereas the outcome of centrifugation was affected by the ionic strength of the extractant. Widely used sequential fractionation procedures proved able to detect land-use-derived differences in the distribution of P among fractions of different solubilities. However, interpretation of the results from extraction experiments requires a better understanding of the biogeochemical function of the recovered P fraction in the P cycle in differently managed soils under dissimilar climatic conditions.
Abstract:
B. cereus is a gram-positive bacterium that possesses two different forms of life: the large, rod-shaped cells (ca. 0.002 mm by 0.004 mm) that are able to propagate, and the small (0.001 mm), oval-shaped spores. The spores can survive in almost any environment for up to centuries without nourishment or water. They are insensitive to most agents that normally kill bacteria: heating for several hours at 90 ºC, radiation, disinfectants, and extremely alkaline (≥ pH 13) or acidic (≤ pH 1) environments. The spores are highly hydrophobic and therefore tend to stick to all kinds of surfaces: steel, plastics and live cells. In favorable conditions the spores of B. cereus may germinate into vegetative cells capable of producing food poisoning toxins. The toxins can be heat-labile proteins formed after ingestion of the contaminated food, inside the gastrointestinal tract (diarrhoeal toxins), or heat-stable peptides formed in the food (the emesis-causing toxin, cereulide). Cereulide cannot be inactivated in foods by cooking or any other procedure applicable to food. Cereulide in consumed food causes serious illness in humans, even fatalities. In this thesis, B. cereus strains originating from different kinds of foods and environments and from eight different countries were inspected for their capability of forming cereulide. Of the 1041 isolates from soil, animal feed, water, air, used bedding, grass, dung and equipment, only 1.2% were capable of producing cereulide, whereas of the 144 isolates originating from foods, 24% were cereulide producers. Cereulide was detected by two methods: by its toxicity towards mammalian cells (sperm assay) and by its peculiar chemical structure using liquid chromatography-mass spectrometry. B. cereus is known as one of the most frequent bacteria occurring in food, and most foods contain more than one kind of B. cereus. When 100 randomly selected isolates of B. cereus from commercial infant foods (dry formulas) were tested, 11% of these produced cereulide. Considering a frequent content of 10³ to 10⁴ cfu (colony forming units) of B. cereus per gram of infant food formula (dry), it appears likely that most servings (200 ml, 30 g of the powder reconstituted with water) may contain cereulide producers. When a reconstituted infant formula was inoculated with >10⁵ cfu of cereulide-producing B. cereus per ml and left at room temperature, cereulide accumulated to food poisoning levels (>0.1 mg of cereulide per serving) within 24 hours. Paradoxically, the amount of cereulide (per g of food) increased 10- to 50-fold when the food was diluted 4- to 15-fold with water. The amount of cereulide produced strongly depended on the composition of the formula: most toxin was formed in formulas with cereals mixed with milk, and least in formulas based on milk only. In spite of the aggressive cleaning practices of the modern dairy industry, certain genotypes of B. cereus appear to colonise the silo tanks. In this thesis, four strategies explaining the survival of their spores in dairy silos were identified. First, high survival (≤ 1.5 log kill in 15 min) in the hot alkaline (pH >13) wash liquid used at the dairies for cleaning-in-place. Second, efficient adherence of the spores to stainless steel from cold water. Third, a cereulide-producing group with spores characterized by slow germination in rich medium and well-preserved viability when exposed to heating at 90 ºC. Fourth, spores capable of germinating at 8 ºC and possessing the psychrotolerance gene, cspA.
There were indications that spores highly resistant to hot 1% sodium hydroxide may be effectively inactivated by hot 0.9% nitric acid. Eight out of the 14 dairy silo tank isolates possessing hot-alkali-resistant spores were capable of germinating and forming biofilm in whole milk, a trait not previously reported for B. cereus. In this thesis it was shown that cereulide-producing B. cereus was capable of inhibiting the growth of cereulide non-producing B. cereus occurring in the same food. This phenomenon, called antagonism, has long been known to exist between B. cereus and other microbial species, e.g. various species of Bacillus, gram-negative bacteria and plant pathogenic fungi. In this thesis, intra-species antagonism of B. cereus was shown for the first time. This brother-killing did not depend on the cereulide molecule, as some of the cereulide non-producers were also potent antagonists. Interestingly, the antagonistic clades were most frequently found among isolates from food implicated in human illness. The antagonistic property was therefore proposed in this thesis as a novel virulence factor that increases the human morbidity of the species B. cereus, in particular of the cereulide producers.
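A rough back-of-the-envelope check of the per-serving estimate in the abstract above (the figures are taken from the text; the tally itself is only illustrative):

```latex
30\ \mathrm{g\ powder} \times \left(10^{3}\text{--}10^{4}\right)\ \mathrm{cfu\,g^{-1}}
  \;\approx\; 3\times10^{4}\text{--}3\times10^{5}\ \mathrm{cfu\ per\ 200\ ml\ serving}
```

If cereulide producers occur among the contaminating strains at anything like the 11% frequency observed in the infant-food isolates, most servings in this range would be expected to contain at least some producer cells, which is the basis of the concern expressed above.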
Abstract:
Historical stocking methods of continuous, season-long grazing of pastures with little account of growing conditions have caused some degradation within grazed landscapes in northern Australia. Alternative stocking methods have been implemented to address this degradation and raise the productivity and profitability of the principal livestock, cattle. Because information comparing stocking methods is limited, an evaluation was undertaken to quantify the effects of stocking methods on pastures, soils and grazing capacity. The approach was to monitor existing stocking methods on nine commercial beef properties in north and south Queensland. Environments included native and exotic pastures and eucalypt (lighter soil) and brigalow (heavier soil) land types. Breeding and growing cattle were grazed under each method. The owners/managers, formally trained in pasture and grazing management, made all management decisions affecting the study sites. Three stocking methods were compared: continuous (with rest), extensive rotation and intensive rotation (commonly referred to as 'cell grazing'). Two or three stocking methods were examined on each property: in total, 21 methods (seven continuous, six extensive rotations and eight intensive rotations) were monitored over 74 paddocks between 2006 and 2009. Pasture and soil surface measurements were made in the autumns of 2006, 2007 and 2009, while paddock grazing was analysed from property records for the period from 2006 to 2009. The first 2 years had drought conditions (average rainfall decile of 3.4) but were followed by 2 years of above-average rainfall. There were no consistent differences between stocking methods across all sites over the 4 years for herbage mass, plant species composition, total and litter cover, or landscape function analysis (LFA) indices. There were large responses to rainfall in the last 2 years, with mean autumn herbage mass increasing from 1970 kg DM ha⁻¹ in 2006-07 to 3830 kg DM ha⁻¹ in 2009. Over the same period, ground and litter cover and LFA indices increased. Across all sites and 4 years, mean grazing capacity was similar for the three stocking methods. There were, however, significant differences in grazing capacity between stocking methods at four sites, but these differences were not consistent between stocking methods or sites. Both the continuous and intensive rotation methods supported the highest average annual grazing capacity at different sites. The results suggest that cattle producers can obtain similar ecological responses and carry similar numbers of livestock under any of the three stocking methods.
Abstract:
The in vivo faecal egg count reduction test (FECRT) is the most commonly used test to detect anthelmintic resistance (AR) in gastrointestinal nematodes (GIN) of ruminants in pasture-based systems. However, there are several variations on the method, some more appropriate than others in specific circumstances. While in some cases labour and time can be saved by collecting only post-drench faecal worm egg counts (FEC) of treatment groups with controls, or pre- and post-drench FEC of a treatment group with no controls, there are circumstances when pre- and post-drench FEC of an untreated control group as well as of the treatment groups are necessary. Computer simulation techniques were used to determine the most appropriate of several methods for calculating AR when there is continuing larval development during the testing period, as often occurs when anthelmintic treatments against genera of GIN with high biotic potential or high re-infection rates, such as Haemonchus contortus of sheep and Cooperia punctata of cattle, are less than 100% efficacious. Three field FECRT experimental designs were investigated: (I) post-drench FEC of treatment and control groups, (II) pre- and post-drench FEC of a treatment group only, and (III) pre- and post-drench FEC of treatment and control groups. To investigate the performance of methods of indicating AR for each of these designs, simulated animal FEC were generated from negative binomial distributions, with subsequent sampling from binomial distributions to account for the drench effect, with varying parameters for worm burden, larval development and drench resistance. Calculations of percent reductions and confidence limits were based on those of the Standing Committee for Agriculture (SCA) guidelines. For the two field methods with pre-drench FEC, confidence limits were also determined from cumulative inverse Beta distributions of FEC, for eggs per gram (epg) and the number of eggs counted, at detection levels of 50 and 25. Two rules for determining AR were also assessed: (1) percent reduction (%R) < 95% and lower confidence limit < 90%; and (2) upper confidence limit < 95%. For each combination of worm burden, larval development and drench resistance parameters, 1000 simulations were run to determine the number of times the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. When continuing larval development occurs during the testing period of the FECRT, the simulations showed that AR should be calculated from pre- and post-drench worm egg counts of an untreated control group as well as of the treatment group. If the widely used resistance rule 1 is used to assess resistance, rule 2 should also be applied, especially when %R is in the range 90 to 95% and resistance is suspected.
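A minimal sketch (in Python, with hypothetical parameter values and a deliberately simplified larval-development term) of the kind of simulation described above: pre-drench counts drawn from a negative binomial distribution, the drench effect applied by binomial thinning, and the percent reduction computed with correction for the untreated controls, as in design (III):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_fec(n_animals, mean_epg, k):
    """Pre-drench faecal egg counts from a negative binomial with
    mean `mean_epg` and aggregation parameter `k`."""
    p = k / (k + mean_epg)
    return rng.negative_binomial(k, p, size=n_animals)

def post_drench(pre_fec, efficacy, larval_dev=0.0):
    """Binomial thinning for eggs surviving the drench, plus extra eggs
    from continuing larval development (a crude additive term used here
    purely for illustration)."""
    survivors = rng.binomial(pre_fec, 1.0 - efficacy)
    extra = rng.poisson(larval_dev * pre_fec.mean(), size=pre_fec.size)
    return survivors + extra

# Design III: pre- and post-drench FEC for both treated and control groups.
treat_pre = simulate_fec(15, mean_epg=500, k=1.0)
ctrl_pre  = simulate_fec(15, mean_epg=500, k=1.0)
treat_post = post_drench(treat_pre, efficacy=0.92, larval_dev=0.3)
ctrl_post  = post_drench(ctrl_pre,  efficacy=0.0,  larval_dev=0.3)

# Percent reduction corrected for the change observed in untreated controls.
pct_r = 100.0 * (1.0 - (treat_post.mean() / treat_pre.mean())
                       / (ctrl_post.mean() / ctrl_pre.mean()))
print(f"%R = {pct_r:.1f}")
```

Repeating such a run, say, 1000 times for each parameter combination and recording how often resistance is declared mirrors the design of the simulation study.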
Abstract:
The treatment of large segmental bone defects remains a significant clinical challenge. Due to limitations surrounding the use of bone grafts, tissue-engineered constructs for the repair of large bone defects could offer an alternative. Before translation of any newly developed tissue engineering (TE) approach to the clinic, efficacy of the treatment must be shown in a validated preclinical large animal model. Currently, biomechanical testing, histology, and microcomputed tomography are performed to assess the quality and quantity of the regenerated bone. However, in vivo monitoring of the progression of healing is seldom performed, although it could reveal important information regarding the time to restoration of mechanical function and the acceleration of regeneration. Furthermore, since the mechanical environment is known to influence bone regeneration, and limb loading of the animals is difficult to control, characterizing activity and load history could help explain variability in the acquired data sets and identify potential outliers caused by abnormal loading. Many approaches have been devised to monitor the progression of healing and characterize the mechanical environment in fracture healing studies. In this article, we review previous methods and share results of our group's recent work toward developing and implementing a comprehensive biomechanical monitoring system to study bone regeneration in preclinical TE studies.
Abstract:
We have used the density matrix renormalization group (DMRG) method to study the linear and nonlinear optical responses of first-generation nitrogen-based dendrimers with donor-acceptor groups. We have employed the Pariser–Parr–Pople Hamiltonian to model the interacting pi electrons in these systems. Within the DMRG method we have used an innovative scheme to target excited states with large transition dipole to the ground state. This method reproduces exact optical gaps and polarization in systems where exact diagonalization of the Hamiltonian is possible. We have used a correction vector method, which tacitly takes into account the contribution of all excited states, to obtain the ground state polarizability, first hyperpolarizability, and two-photon absorption cross sections. We find that the lowest optical excitations as well as the lowest excited triplet states are localized. It is interesting to note that, unlike in linear polyenes, the first hyperpolarizability saturates more rapidly with system size than the linear polarizability.
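For context, the correction vector approach replaces the usual sum-over-states expression for the dynamic polarizability with a single linear solve per frequency. In one common convention (generic notation, with the dipole operator measured relative to its ground-state expectation value and a small imaginary broadening added in practice; this is not reproduced from the paper):

```latex
\alpha_{ij}(\omega) \;=\; \sum_{n \neq 0}
  \left[
    \frac{\langle 0|\hat{\mu}_i|n\rangle\langle n|\hat{\mu}_j|0\rangle}{E_n - E_0 - \omega}
  + \frac{\langle 0|\hat{\mu}_j|n\rangle\langle n|\hat{\mu}_i|0\rangle}{E_n - E_0 + \omega}
  \right],
\qquad
(\hat{H} - E_0 - \omega)\,|\phi_j(\omega)\rangle = \hat{\mu}_j|0\rangle
\;\Rightarrow\;
\alpha_{ij}(\omega) = \langle 0|\hat{\mu}_i|\phi_j(\omega)\rangle
                    + \langle 0|\hat{\mu}_j|\phi_i(-\omega)\rangle
```

Solving for the correction vector within the DMRG basis is what lets all excited states contribute implicitly, without computing them one by one.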
Abstract:
To break the yield ceiling of rice production, a super rice project was developed in 1996 to breed rice varieties with super-high yield. A two-year experiment was conducted to evaluate the yield and nitrogen (N)-use response of super rice to different planting methods in the single cropping season. A total of 17 rice varieties, including 13 super rice and four non-super checks (CK), were grown under three N levels [0 (N0), 150 (N150), and 225 (N225) kg ha⁻¹] and two planting methods [transplanting (TP) and direct-seeding in wet conditions (WDS)]. Grain yield under WDS (7.69 t ha⁻¹) was generally lower than under TP (8.58 t ha⁻¹). However, grain yield under the different planting methods was affected by N rates as well as by variety groups. In both years, there was no difference in grain yield between super and CK varieties at N150, irrespective of planting method. However, the grain yield difference was dramatic in the japonica group at N225; that is, there was an average increase of 11.3% and 14.1% in super rice over CK varieties under WDS and TP, respectively. This suggests that high N input contributes to narrowing the yield gap in super rice varieties, and also indicates that super rice was bred for high-fertility conditions. In the japonica group, more N was accumulated in super rice than in CK at N225, but no difference was found between super and CK varieties at N0 and N150. Similar results were also found for N agronomic efficiency. The results suggest that super rice varieties have an advantage in N-use efficiency when high N is applied. The response of super rice was greater under TP than under WDS. The results also suggest the need to further improve agronomic and other management practices to achieve high yield and N-use efficiency for super rice varieties under WDS.
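For reference, the N agronomic efficiency compared above is conventionally computed as the yield gain per unit of fertilizer N applied (a standard definition, not a formula quoted from the paper):

```latex
\mathrm{AE_N} \;=\; \frac{Y_N - Y_0}{F_N}
```

where Y_N is grain yield (kg ha⁻¹) at the applied N rate F_N (here 150 or 225 kg ha⁻¹) and Y_0 is the yield of the unfertilized N0 control.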
Abstract:
The commodity plastics used in our everyday lives are based on polyolefin resins, and they find a wide variety of applications in several areas. Most of the production is carried out in catalyzed low-pressure processes. As a consequence, polymerization of ethene and α-olefins has been one of the focus areas of catalyst research both in industry and academia. An enormous amount of effort has been dedicated to fine-tuning the processes, obtaining better control of the polymerization, and producing tailored polymer structures. The literature review of the thesis concentrates on the use of Group IV metal complexes as catalysts for the polymerization of ethene and branched α-olefins. More precisely, the review focuses on the use of complexes bearing [O,O]- and [O,N]-type ligands, which have gained considerable interest. Effects of the ligand framework as well as the mechanical and fluxional behaviour of the complexes are discussed. The experimental part consists mainly of the development of new Group IV metal complexes bearing [O,O] and [O,N] ligands and their use as catalyst precursors in ethene polymerization. Part of the experimental work deals with the use of high-throughput techniques in tailoring the properties of new polymer materials synthesized using Group IV complexes as catalysts. It is known that by changing the steric and electronic properties of the ligand framework it is possible to fine-tune the catalyst and to gain control over the polymerization reaction. This is why the complex structures in this thesis were designed so that the ligand frameworks could be fairly easily modified. Altogether 14 complexes were synthesised and used as catalysts in ethene polymerizations. It was found that the ligand framework did have an impact within the studied catalyst families. The activities of the catalysts were affected by changes in the complex structure, and effects on the produced polymers were also observed: molecular weights and molecular weight distributions depended on the catalyst structure used. Some catalysts also produced bi- or multimodal polymers. During the last decade, high-throughput techniques developed in the pharmaceutical industry have been adopted in polyolefin research in order to speed up the screening and optimization of catalyst candidates. These methods can now be regarded as established and suitable for academia and industry alike. These high-throughput techniques were used in tailoring poly(4-methyl-1-pentene) polymers synthesized using Group IV metal complexes as catalysts. The work done in this thesis represents the first successful example in which high-throughput synthesis techniques are combined with high-throughput mechanical testing to speed up the discovery process for new polymer materials.
Abstract:
Objectives: Hematoma quality (especially the fibrin matrix) plays an important role in the bone healing process. Here, we investigated the effect of interleukin-1 beta (IL-1β) on fibrin clot formation from platelet-poor plasma (PPP). Methods: Five-milliliter rat whole-blood samples were collected from the hepatic portal vein. All blood samples were first standardized via a thrombelastograph (TEG), blood cell count, and the measurement of fibrinogen concentration. PPP was prepared by collecting the top two-fifths of the plasma after centrifugation at 400 × g for 10 min at 20°C. The effects of the cytokine IL-1β on artificial fibrin clot formation from PPP solutions were determined by scanning electron microscopy (SEM), confocal microscopy (CM), turbidity, and clot lysis assays. Results: The lag time for protofibril formation was markedly shortened in the IL-1β treatment groups (243.8 ± 76.85 with 50 pg/mL IL-1β and 97.5 ± 19.36 with 500 pg/mL IL-1β) compared to the control group without IL-1β (543.8 ± 205.8). Maximal turbidity was observed in the control group. IL-1β (500 pg/mL) treatment significantly decreased fiber diameters, resulting in smaller pore sizes and an increased density of the fibrin clot structure formed from PPP (P < 0.05). The clot lysis assay revealed that 500 pg/mL IL-1β induced a lower susceptibility to dissolution due to the formation of thinner and denser fibers. Conclusion: IL-1β can significantly influence PPP fibrin clot structure, which may affect the early bone healing process.
Abstract:
Climate affects ecological processes at different levels. Large-scale climate processes, together with the atmosphere and the oceans, regulate local weather phenomena over large areas (from continents to hemispheres). This doctoral thesis aims to explain how large-scale climate has affected certain ecological processes in the northern coniferous (boreal) forest zone. The processes selected were tree-ring growth, the occurrence of forest fires, and tree mortality caused by the mountain pine beetle. Large-scale climate was found to have influenced the frequency, duration and extent of these processes through key weather variables over very wide areas. The studied processes had a strong connection to large-scale climate. The connection has, however, been highly dynamic and changed during the 20th century as climate change altered the internal relationships between large-scale and regional climate processes.
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to other flowgraph methods in such cases.
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of the genes. Third, a novel image segmentation approach that segregates the fluorescent signal from undesired noise is developed using an additional dye, SYBR green RNA II. This technique helped to identify signal arising only from the hybridized DNA, so that signal corresponding to dust, scratches, spilled dye and other noise sources was avoided. Fourth, an integrated statistical model is developed, in which signal correction, systematic array effects, dye effects, and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays but can also, with some modifications, be applied to other high-throughput technologies. Keywords: High-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
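The multiple-scan idea can be illustrated with a deliberately naive sketch: each scan is rescaled by its (here assumed known) sensitivity, saturated readings are discarded, and the remaining values are averaged per spot. The thesis instead estimates the calibrated latent signal with a Bayesian latent intensity model, so treat this only as an intuition aid; all names and values are hypothetical.

```python
import numpy as np

SATURATION = 65535  # ceiling of a 16-bit scanner

def combine_scans(scans, gains):
    """Combine several scans of the same array taken at different
    scanner sensitivities into one per-spot intensity estimate."""
    scans = np.asarray(scans, dtype=float)
    gains = np.asarray(gains, dtype=float)[:, None]
    rescaled = scans / gains                        # bring scans to a common scale
    rescaled[scans >= 0.95 * SATURATION] = np.nan   # drop (near-)saturated readings
    return np.nanmean(rescaled, axis=0)             # average the usable readings

# Three scans of four spots at low/medium/high sensitivity (toy numbers).
scans = [[  400,  3000, 20000, 55000],
         [  900,  6500, 42000, 65535],   # brightest spot saturated here
         [ 1900, 13500, 65535, 65535]]   # ... and here
gains = [1.0, 2.1, 4.3]
print(combine_scans(scans, gains))
```

A Bayesian treatment additionally propagates the uncertainty of dim spots and estimates the scan-to-scan calibration instead of assuming it.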
Abstract:
Bacteria play an important role in many ecological systems. The molecular characterization of bacteria using either cultivation-dependent or cultivation-independent methods reveals the large scale of bacterial diversity in natural communities, and the vastness of subpopulations within a species or genus. Understanding how bacterial diversity varies across different environments and also within populations should provide insights into many important questions of bacterial evolution and population dynamics. This thesis presents novel statistical methods for analyzing bacterial diversity using widely employed molecular fingerprinting techniques. The first objective of this thesis was to develop Bayesian clustering models to identify bacterial population structures. Bacterial isolates were identified using multilocus sequence typing (MLST), and Bayesian clustering models were used to explore the evolutionary relationships among isolates. Our method involves the inference of genetic population structures via an unsupervised clustering framework in which the dependence between loci is represented using graphical models. The population dynamics that generate such a population stratification were investigated using a stochastic model, in which homologous recombination between subpopulations can be quantified within a gene flow network. The second part of the thesis focuses on cluster analysis of community compositional data produced by two different cultivation-independent analyses: terminal restriction fragment length polymorphism (T-RFLP) analysis and fatty acid methyl ester (FAME) analysis. The cluster analysis aims to group bacterial communities that are similar in composition, which is an important step in understanding the overall influences of environmental and ecological perturbations on bacterial diversity. A common feature of T-RFLP and FAME data is zero-inflation, meaning that zero values are observed much more frequently than would be expected, for example, from a Poisson distribution in the discrete case or a Gaussian distribution in the continuous case. We provide two strategies for modeling zero-inflation in the clustering framework, which were validated on both synthetic and complex empirical data sets. We show in the thesis that our model, which takes into account dependencies between loci in MLST data, can produce better clustering results than methods that assume independent loci. Furthermore, computer algorithms that are efficient in analyzing large-scale data were adopted to meet the increasing computational demand. Our method for detecting homologous recombination in subpopulations may provide a theoretical criterion for defining bacterial species. The clustering of bacterial community data, including T-RFLP and FAME, provides an initial effort toward discovering the evolutionary dynamics that structure and maintain bacterial diversity in the natural environment.
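A minimal sketch of the zero-inflated count model referred to above, written as a generic zero-inflated Poisson density (cluster membership and the continuous, FAME-type case are omitted; the parameter names are illustrative, not the thesis's):

```python
import numpy as np
from scipy.stats import poisson

def zip_loglik(x, pi, lam):
    """Log-likelihood of a zero-inflated Poisson model: with probability
    `pi` an observation is a structural zero, otherwise it is Poisson(lam)."""
    x = np.asarray(x)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))     # P(X = 0)
    ll_pos = np.log(1.0 - pi) + poisson.logpmf(x, lam)   # P(X = k), k > 0
    return float(np.where(x == 0, ll_zero, ll_pos).sum())

# Toy T-RFLP-like fragment counts with an obvious excess of zeros.
counts = [0, 0, 0, 5, 0, 2, 0, 7, 0, 0, 3, 0]
print(zip_loglik(counts, pi=0.5, lam=4.0))
```

In a clustering context, each cluster would carry its own (pi, lam) pair, and the likelihood above would enter the mixture or Bayesian partition model.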
Abstract:
Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework, called gapless modeling, to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it on real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software intended for the reconstruction of models for ¹³C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
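The 'gapless' idea can be illustrated with a small reachability check: a reaction is usable only once all of its substrates are producible from a seed set of nutrients, and its products then become producible in turn. This is a generic sketch in the spirit of network expansion, not the thesis's actual algorithm; the toy network and names are made up.

```python
def gapless_reactions(reactions, seeds):
    """reactions: dict of id -> (substrates, products), both sets.
    seeds: metabolites assumed available (e.g. growth-medium nutrients).
    Returns the reactions that can fire without gaps and the set of
    producible metabolites."""
    producible = set(seeds)
    fired = set()
    changed = True
    while changed:
        changed = False
        for rid, (subs, prods) in reactions.items():
            if rid not in fired and subs <= producible:
                fired.add(rid)
                producible |= prods
                changed = True
    return fired, producible

# Toy network: R2 is gapped until R1 has produced metabolite B;
# R3 can never fire because E is not producible.
toy = {
    "R1": ({"A"}, {"B"}),
    "R2": ({"B", "C"}, {"D"}),
    "R3": ({"E"}, {"F"}),
}
print(gapless_reactions(toy, seeds={"A", "C"}))
```

Gapless reconstruction then amounts to choosing a reaction set for which this kind of check leaves no required reaction or biomass component unreachable.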
Abstract:
This paper presents a method of designing a minimax filter in the presence of large plant uncertainties and constraints on the mean-squared values of the estimates. The minimax filtering problem is reformulated in the framework of a deterministic optimal control problem, and the method of solution employed invokes the matrix Minimum Principle. The constrained linear filter and its relation to singular control problems are illustrated. For the class of problems considered here it is shown that the filter can be constrained separately after carrying out the minimaximization. Numerical examples are presented to illustrate the results.
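In generic form (the notation is illustrative and not taken from the paper), the constrained minimax filtering problem amounts to choosing the filter, here summarized by its gain F, to minimize the worst-case mean-squared estimation error over the plant uncertainty set, subject to a bound on the mean-squared value of the estimate:

```latex
\min_{F}\;\max_{\theta \in \Theta}\;
  \mathbb{E}\!\left[\|x - \hat{x}_F\|^{2} \,\middle|\, \theta\right]
\quad \text{subject to} \quad
  \mathbb{E}\!\left[\hat{x}_F^{\top}\hat{x}_F\right] \le c
```

Reformulating this as a deterministic optimal control problem, as the abstract describes, is what allows the matrix Minimum Principle to be invoked, and the paper's observation is that the mean-squared constraint can be imposed separately after the minimaximization.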