877 results for Individual-based modeling
Abstract:
The implementation of a three-phase sinusoidal pulse-width-modulated inverter control strategy using a microprocessor is discussed in this paper. To save CPU time, the DMA technique is used to transfer the switching pattern from memory to the pulse amplifier and isolation circuits of the individual thyristors in the inverter bridge. A method for controlling both voltage and frequency is also discussed.
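For readers unfamiliar with sine-triangle SPWM, the sketch below shows how such a switching table could be precomputed and stored in memory, the kind of table a DMA controller could then stream to the gate-drive circuitry. It is a minimal illustration under assumed frequencies and table size, not the paper's microprocessor implementation; voltage is set through the modulation index and frequency through the fundamental, the two quantities the control strategy adjusts.

```python
import numpy as np

# Minimal sine-triangle SPWM sketch (assumed parameters, not the paper's code).
F_OUT = 50.0          # desired fundamental frequency, Hz (assumed)
F_CARRIER = 1500.0    # triangular carrier frequency, Hz (assumed)
M_INDEX = 0.8         # modulation index: controls output voltage
SAMPLES = 3000        # table length for one fundamental cycle (assumed)

t = np.arange(SAMPLES) / SAMPLES / F_OUT
# Triangle carrier in [-1, 1].
carrier = 2.0 * np.abs(2.0 * ((t * F_CARRIER) % 1.0) - 1.0) - 1.0

# One switching pattern per phase, displaced by 120 degrees.
pattern = np.empty((3, SAMPLES), dtype=np.uint8)
for phase in range(3):
    reference = M_INDEX * np.sin(2.0 * np.pi * F_OUT * t - phase * 2.0 * np.pi / 3.0)
    pattern[phase] = (reference >= carrier).astype(np.uint8)  # 1 = upper switch on

print(pattern[:, :16])  # first few entries of the stored switching table
```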
Abstract:
High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, thereby reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers has been routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been demonstrated to produce robust partial least squares models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio from 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model composed of Raman spectral data and lignin S/G ratios measured using pyrolysis/molecular beam mass spectrometry (pyMBMS) for the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged according to plant species, and the means were not found to differ from the pyMBMS ratios when evaluating the mean values of each method within the 95% confidence interval. Pairwise comparisons within each data set were employed to assess statistical differences between each biomass species. While some pairwise appraisals failed to differentiate between species, Acacias, in both data sets, clearly display significant differences in their S/G composition which distinguish them from eucalypts. This research shows the power of using Raman spectroscopy to supplant tedious, destructive methods for the evaluation of the lignin S/G ratio of diverse plant biomass materials.
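As a rough illustration of the workflow described (calibrate a PLS model on spectra against pyMBMS reference values, then predict unknowns), here is a minimal sketch using scikit-learn with random placeholder data; the matrix sizes and component count are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 1024))      # 80 calibration spectra, 1024 wavenumbers (assumed)
y_train = rng.uniform(1.0, 4.0, size=80)   # reference S/G ratios from pyMBMS (placeholder)
X_unknown = rng.normal(size=(269, 1024))   # stand-in for the 269 unknown feedstocks

pls = PLSRegression(n_components=10)       # component count would be tuned by CV
print(cross_val_score(pls, X_train, y_train, cv=5).mean())  # calibration check (R^2)
pls.fit(X_train, y_train)
sg_predicted = pls.predict(X_unknown).ravel()  # predicted S/G ratio per feedstock
print(sg_predicted[:5])
```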
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, the introductory part of this work presents a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning also applies under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates together with a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is also feasible in this case.
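The Gaussian approximation mentioned above is the standard Laplace approximation at the posterior mode; a textbook formulation (not quoted from the thesis) is:

```latex
% Laplace (Gaussian) approximation at the posterior mode \hat{\theta}.
% With a flat prior the posterior is proportional to the likelihood, so
% \hat{\theta} coincides with the maximum likelihood estimate, which is the
% sense in which ML inference appears as a special case of Bayesian inference.
p(\theta \mid y) \approx \mathcal{N}\!\left(\hat{\theta},\, H^{-1}\right),
\qquad
H = -\left.\frac{\partial^{2} \log p(\theta \mid y)}
{\partial \theta \, \partial \theta^{\top}}\right|_{\theta = \hat{\theta}}
```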
Abstract:
Probiotic supplements are single- or mixed-strain cultures of live microorganisms that benefit the host by improving the properties of the indigenous microflora (Seo et al 2010). In a pilot study at the University of Queensland, Norton et al (2008) found that Bacillus amyloliquefaciens strain H57 (H57), primarily investigated as an inoculum for making high-quality hay, improved feed intake and nitrogen utilisation over several weeks in pregnant ewes. The purpose of the following study was to further test the potential of H57: to show that it survives the steam-pelleting process, and that it improves the performance of ewes fed pellets based on an agro-industrial by-product with a reputation for poor palatability, palm kernel meal (PKM) (McNeill 2013). Thirty-two first-parity White Dorper ewes (day 37 of pregnancy, mean liveweight = 47.3 kg, mean age = 15 months) were inducted into individual pens in the animal house at the University of Queensland, Gatton. They were adjusted onto PKM-based pellets (g/kg dry matter (DM): PKM, 408; sorghum, 430; chickpea hulls, 103; plus minerals and vitamins; crude protein, 128; ME, 11.1 MJ/kg DM) until day 89 of pregnancy and thereafter fed a predominantly pelleted diet with or without H57 spores (10⁹ colony forming units (cfu)/kg pellet, as fed), plus 100 g/ewe/day oaten chaff, until day 7 of lactation. From day 7 to 20 of lactation the pelleted component of the diet was steadily reduced and replaced by a 50:50 mix of lucerne and oaten chaff, fed ad libitum, plus 100 g/ewe/day of ground sorghum grain with or without H57 (10⁹ cfu/ewe/day). The period of adjustment in pregnancy (day 37-89) extended beyond expectations due to some evidence of mild ruminal acidosis after some initially high intakes that were followed by low intakes. During that time the diet was modified, in an attempt to improve palatability, by the addition of oaten chaff and the removal of an acidifying agent (NH4Cl) that had been added initially to reduce the risk of urinary calculi. Eight ewes were removed due to inappetence, leaving 24 ewes to start the trial at day 90 of pregnancy. From day 90 of pregnancy until day 63 of lactation, liveweights of the ewes and their lambs were determined weekly and at parturition. Feed intakes of the ewes were determined weekly. Once lambing began, 1 ewe was removed because it gave birth to twin lambs (whereas the rest gave birth to a single lamb), 4 due to the loss of their lambs (2 to dystocia), and 1 due to copper toxicity. The PKM pellets were suspected to be the cause of the copper toxicity and so were removed in early lactation. Hence, the final statistical analysis using STATISTICA 8 (repeated measures ANOVA for feed intake; one-way ANOVA for liveweight change and birth weight) was completed on 23 ewes for the pregnancy period (n = 11 fed H57; n = 12 control), and 18 ewes or lambs for the lactation period (n = 8 fed H57; n = 10 control). From day 90 of pregnancy until parturition the H57-supplemented ewes ate 17% more DM (g/day: 1041 vs 889, sed = 42.4, P = 0.04) and gained more liveweight (g/day: 193 vs 24.0, sed = 25.4, P = 0.0002), but produced lambs with a similar birthweight (kg: 4.18 vs 3.99, sed = 0.19, P = 0.54). Over the 63 days of lactation the H57 ewes ate similar amounts of DM but grew more slowly than the control ewes (g/day: 1.5 vs 97.0, sed = 21.7, P = 0.012). The lambs of the H57 ewes grew faster than those of the control ewes for the first 21 days of lactation (g/day: 356 vs 265, sed = 16.5, P = 0.006).
These data support the findings of Norton et al (2008) and Kritas et al (2006) that certain Bacillus spp. supplements can improve the performance of pregnant and lactating ewes. The current study particularly highlighted the capacity of H57 to stimulate immature ewes to continue to grow maternal tissue through pregnancy, possibly through an enhanced appetite, which then appeared to give them a greater capacity to partition nutrients to their lambs through milk, at least for the first few weeks of lactation, a critical time for optimising lamb survival. To conclude, H57 can survive the steam-pelleting process and improve feed intake and maternal liveweight gain in late pregnancy, and performance in early lactation, of first-parity ewes fed a diet based on PKM.
Abstract:
The future use of genetically modified (GM) plants in food, feed and biomass production requires careful consideration of the possible risks related to the unintended spread of transgenes into new habitats. This may occur via introgression of the transgene into conventional genotypes, due to cross-pollination, and via the invasion of GM plants into new habitats. Assessment of the possible environmental impacts of GM plants requires estimation of the level of gene flow from a GM population. Furthermore, management measures for reducing gene flow from GM populations are needed in order to prevent possible unwanted effects of transgenes on ecosystems. This work develops modeling tools for estimating gene flow from GM plant populations in boreal environments and for investigating the mechanisms of the gene flow process. To describe the spatial dimensions of gene flow, dispersal models are developed for the local- and regional-scale spread of pollen grains and seeds, with special emphasis on wind dispersal. This study provides tools for describing cross-pollination between GM and conventional populations and for estimating the levels of transgenic contamination of conventional crops. For perennial populations, a modeling framework describing the dynamics of plants and genotypes is developed in order to estimate the gene flow process over a sequence of years. The dispersal of airborne pollen and seeds cannot be easily controlled, and small amounts of these particles are likely to disperse over long distances. Wind dispersal processes are highly stochastic due to variation in atmospheric conditions, so there may be considerable variation between individual dispersal patterns. This, in turn, is reflected in the large variation in annual levels of cross-pollination between GM and conventional populations. Even though land-use practices affect the average levels of cross-pollination between GM and conventional fields, the level of transgenic contamination of a conventional crop remains highly stochastic. The demographic effects of a transgene have impacts on the establishment of transgenic plants amongst conventional genotypes of the same species. If the transgene gives a plant a considerable fitness advantage over conventional genotypes, the spread of transgenes to conventional populations can be strongly increased. In such cases, dominance of the transgene considerably increases gene flow from GM to conventional populations, due to the enhanced fitness of heterozygous hybrids. The fitness of GM plants in conventional populations can be reduced by linking the selectively favoured primary transgene to a disfavoured mitigation transgene. Recombination between these transgenes is a major risk of this technique, especially because it tends to take place amongst the conventional genotypes and thus promotes the establishment of invasive transgenic plants in conventional populations.
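To make the stochasticity claim concrete, the following toy simulation draws pollen transport distances from an assumed exponential kernel whose scale varies from year to year with "wind"; it is an illustrative sketch, not the dispersal model developed in the thesis, and all parameter values are invented.

```python
import numpy as np

# Stochastic wind-dispersal sketch: repeated "years" give widely varying
# cross-pollination exposure at a fixed distance from the GM source.
rng = np.random.default_rng(1)
MEAN_DISTANCE = 30.0   # mean pollen transport distance, metres (assumed)
N_GRAINS = 5000        # pollen grains released per season (assumed)
FIELD_EDGE = 100.0     # distance from GM source to conventional field, m (assumed)

for year in range(5):
    # Per-year kernel scale fluctuates with atmospheric conditions.
    scale = MEAN_DISTANCE * rng.lognormal(mean=0.0, sigma=0.5)
    distances = rng.exponential(scale, size=N_GRAINS)
    frac_reaching_field = np.mean(distances > FIELD_EDGE)
    print(f"year {year}: {frac_reaching_field:.3%} of grains pass {FIELD_EDGE:.0f} m")
```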
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models, such as GARCH, ACD and CARR models, and are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables taking values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed, and they are found to improve the fit of the model compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed in Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application, both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
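The common MEM structure that nests GARCH-, ACD- and CARR-type models can be written in a standard textbook form (not quoted from the thesis):

```latex
% Multiplicative error model for a non-negative series x_t (e.g. a daily
% price range): a conditional mean scaled by a positive iid error term.
x_t = \mu_t \varepsilon_t,
\qquad \varepsilon_t \sim \text{iid}, \quad \varepsilon_t \ge 0, \quad
\mathbb{E}[\varepsilon_t] = 1,
\qquad
\mu_t = \omega + \alpha x_{t-1} + \beta \mu_{t-1}
% With x_t a squared return this is GARCH(1,1); with durations, ACD(1,1);
% with price ranges, CARR(1,1).
```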
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question to be studied is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions on rights as the background for Summenhart's theory. Special attention is paid to the language in which right was defined in terms of 'power'. In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that right signifies a power by which the right-holder can use material things licitly. It will also be shown how attempts to describe what is meant by the term 'right' became more specific and cultivated. Gerson followed the implications that the term 'power' had in natural philosophy and attributed rights to animals and other creatures. To secure right as a normative concept, Gerson utilized the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study undertakes to reconstruct Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, or the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory managed to bring to an end a particular continuity of thought that was centered upon a view in which right was understood to signify power to licit action. Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty that was present in Gerson's language of rights. Summenhart assimilated libertas with the self-mastery or dominion that, in the economic context of discussion, took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to the thing. The second section elaborates Summenhart's discussion of the species of dominion, which delivered an answer to the question of what kinds of rights exist, and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion by making use of the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies.
In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation but requires further specification, which Summenhart finds in the idea that natural dominion is appropriate to the subject according to the natural law. This characterization led him to treat God's dominion as natural dominion. In part, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By delivering an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times, and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong natural-law-based justification. Summenhart's discussion of the four property rights (usus, usufructus, proprietas, and possession) aimed at delivering a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of juridical source texts, becoming more direct and verbatim the more it became entangled with the details of juridical doctrine. At the same time he promoted his own language of rights, especially by applying the idea of right as relation. He also showed recognizable effort towards systematizing the juridical language related to property rights.
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one method to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with weights assigned to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the non-availability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., probability represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices (2020s, 2050s and 2080s) under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
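The envelope idea behind the imprecise CDF can be illustrated with a short sketch: compute empirical CDFs from several GCM-downscaled rainfall series and take their pointwise bounds. All numbers below are simulated placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 2000, 400)  # monsoon rainfall evaluation grid, mm (assumed)

def empirical_cdf(sample, grid):
    """Fraction of sample values less than or equal to each grid point."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

# Stand-ins for downscaled rainfall from five GCMs over a 30-year time slice.
cdfs = np.array([
    empirical_cdf(rng.gamma(shape=8, scale=110 + 15 * k, size=30), grid)
    for k in range(5)
])
lower_cdf = cdfs.min(axis=0)   # pointwise envelope bounds: an "imprecise" CDF
upper_cdf = cdfs.max(axis=0)   # that contains every individual model's CDF
print(float(lower_cdf[200]), float(upper_cdf[200]))
```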
Abstract:
Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear-chain CRF. CRFs make no assumptions about the independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as prediction of precipitation amounts is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied to downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days and an increase in wet-day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in the shape of the density function, with decreasing probability of lower precipitation and increasing probability of higher precipitation.
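The MAP decoding step can be illustrated with a toy Viterbi decoder over dry/wet states; this is a generic linear-chain decoder with made-up scores, not the paper's CRF code.

```python
import numpy as np

def viterbi(emission_scores, transition_scores):
    """emission_scores: (T, S) per-day/state log-scores;
    transition_scores: (S, S) log-scores between consecutive states."""
    T, S = emission_scores.shape
    best = np.zeros((T, S))
    backptr = np.zeros((T, S), dtype=int)
    best[0] = emission_scores[0]
    for t in range(1, T):
        # cand[prev, cur]: score of ending day t in state cur via state prev.
        cand = best[t - 1][:, None] + transition_scores + emission_scores[t]
        backptr[t] = cand.argmax(axis=0)
        best[t] = cand.max(axis=0)
    # Trace back the most likely dry/wet sequence.
    path = [int(best[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

# Three days, states 0 = dry, 1 = wet (invented scores for illustration).
scores = np.log(np.array([[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]]))
trans = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
print(viterbi(scores, trans))  # -> [0, 1, 1]
```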
Abstract:
Genetic engineering of Bacillus thuringiensis (Bt) Cry proteins has resulted in the synthesis of various novel toxin proteins with enhanced insecticidal activity and specificity towards different insect pests. In this study, a fusion protein consisting of the DI–DII domains of Cry1Ac and garlic lectin (ASAL) has been designed in silico by replacing the DIII domain of Cry1Ac with ASAL. The binding interface between the DI–DII domains of Cry1Ac and lectin has been identified using protein–protein docking studies. Free energy of binding calculations and interaction profiles between the Cry1Ac and lectin domains confirmed the stability of fusion protein. A total of 18 hydrogen bonds was observed in the DI–DII–lectin fusion protein compared to 11 hydrogen bonds in the Cry1Ac (DI–DII–DIII) protein. Molecular mechanics/Poisson–Boltzmann (generalized-Born) surface area [MM/PB (GB) SA] methods were used for predicting free energy of interactions of the fusion proteins. Protein–protein docking studies based on the number of hydrogen bonds, hydrophobic interactions, aromatic–aromatic, aromatic–sulphur, cation–pi interactions and binding energy of Cry1Ac/fusion proteins with the aminopeptidase N (APN) of Manduca sexta rationalised the higher binding affinity of the fusion protein with the APN receptor compared to that of the Cry1Ac–APN complex, as predicted by ZDOCK, Rosetta and ClusPro analysis. The molecular binding interface between the fusion protein and the APN receptor is well packed, analogously to that of the Cry1Ac–APN complex. These findings offer scope for the design and development of customized fusion molecules for improved pest management in crop plants.
Abstract:
This paper presents the hybrid agent construction model being developed to allow the description and development of autonomous agents in SAGE (Scalable, fault-tolerant Agent Grooming Environment), a second-generation FIPA-compliant multi-agent system. We aim to provide the programmer with a generic and well-defined agent architecture enabling the development of sophisticated agents on SAGE possessing the desired properties of autonomous agents: reactivity, pro-activity, social ability and knowledge-based reasoning.
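The layered behavior described (fast reactive responses plus goal-driven deliberation over a knowledge base) might be organized as in the sketch below; all class and method names are hypothetical illustrations of the architecture style, not the SAGE API.

```python
# Hypothetical hybrid (reactive + deliberative) agent loop.
class HybridAgent:
    def __init__(self):
        self.beliefs = {}                  # knowledge base for reasoning
        self.goals = ["deliver_report"]    # example standing goal

    def perceive(self, event):
        self.beliefs[event["topic"]] = event["value"]

    def react(self, event):
        """Reactive layer: fast, rule-based responses (reactivity)."""
        if event["topic"] == "ping":
            return "pong"
        return None

    def deliberate(self):
        """Deliberative layer: pursue goals from beliefs (pro-activity)."""
        if self.goals and self.beliefs.get("network") == "up":
            return f"plan for {self.goals[0]}"
        return None

    def step(self, event):
        self.perceive(event)
        # Reactive rules take priority; otherwise fall back to deliberation.
        return self.react(event) or self.deliberate()

agent = HybridAgent()
print(agent.step({"topic": "ping", "value": None}))       # -> pong (reactive)
print(agent.step({"topic": "network", "value": "up"}))    # -> plan for deliver_report
```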
Abstract:
Parent involvement is widely accepted as being associated with improved educational outcomes for children. However, the role of early school-based parent involvement is still being established. This study investigated the mediating role of self-regulated learning behaviors in the relationship between early school-based parent involvement and children's academic achievement, using data from the Longitudinal Study of Australian Children (N = 2616). Family socioeconomic position, Aboriginal and Torres Strait Islander status, language background, and child gender and cognitive competence were controlled, as well as home- and community-based parent involvement activity, in order to support a more confident interpretation of the results. Structural equation modeling analyses showed that children's self-regulated learning behaviors fully mediated the relationship between school-based parent involvement at Grade 1 and children's reading achievement at Grade 3. Importantly, these relationships were evident for children across all socio-economic backgrounds. Although there was no direct relationship between parent involvement at Grade 1 and numeracy achievement at Grade 3, parent involvement was indirectly associated with higher numeracy achievement through children's self-regulation of learning behaviors, though this relationship was stronger for children from middle and higher socio-economic backgrounds. Implications for policy and practice are discussed, and further research is recommended.
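The mediation structure tested here follows the standard single-mediator form; a generic formulation (not the study's exact SEM specification) is:

```latex
% Simple mediation: parent involvement (X), self-regulated learning (M),
% achievement (Y). "Full mediation" corresponds to c' \approx 0 together
% with a significant indirect effect ab.
M = aX + e_M, \qquad Y = c'X + bM + e_Y,
\qquad \text{indirect effect} = ab, \qquad \text{total effect} = c' + ab
```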
Abstract:
Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic systems-based approaches rather than reductionist single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combination, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. The review starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed. An idea of the promise and limitations of the various approaches, and perspectives for future development, will also be obtained. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study while allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of different methods available for the computational modeling and simulation of biological systems can be used effectively for drug target discovery.
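As a concrete instance of the dynamic modeling such reviews survey, the toy ODE cascade below shows how an in-silico knockdown of a candidate target changes pathway output; the structure and rate constants are invented for illustration and do not come from the review.

```python
import numpy as np
from scipy.integrate import odeint

# Two-node signaling cascade: the candidate target drives species A,
# which in turn drives the pathway output B; both decay linearly.
def cascade(state, t, k_act, k_deg, target_activity):
    a, b = state
    da = k_act * target_activity - k_deg * a   # A production driven by target
    db = k_act * a - k_deg * b                 # output B driven by A
    return [da, db]

t = np.linspace(0, 50, 200)
for activity in (1.0, 0.1):  # baseline vs simulated 90% target knockdown
    traj = odeint(cascade, [0.0, 0.0], t, args=(0.2, 0.1, activity))
    print(f"target activity {activity}: output B approaches {traj[-1, 1]:.2f}")
```

Comparing the two runs is the essence of simulation-based target assessment: a target whose knockdown strongly suppresses the disease-relevant output is a more promising candidate.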
Abstract:
BACKGROUND Polygenic risk scores comprising established susceptibility variants have been shown to be informative classifiers for several complex diseases, including prostate cancer. For prostate cancer, it is unknown whether the inclusion of genetic markers that have so far not been associated with prostate cancer risk at a genome-wide significant level will improve disease prediction. METHODS We built polygenic risk scores in a large training set comprising over 25,000 individuals. Initially, 65 established prostate cancer susceptibility variants were selected. After LD pruning, additional variants were prioritized based on their association with prostate cancer. Six-fold cross-validation was performed to assess genetic risk scores and optimize the number of additional variants to be included. The final model was evaluated in an independent study population including 1,370 cases and 1,239 controls. RESULTS The polygenic risk score with 65 established susceptibility variants provided an area under the curve (AUC) of 0.67. Adding 68 novel variants significantly increased the AUC to 0.68 (P = 0.0012) and the net reclassification index by 0.21 (P = 8.5E-08). All novel variants were located in genomic regions established as associated with prostate cancer risk. CONCLUSIONS Inclusion of additional genetic variants from established prostate cancer susceptibility regions improves disease prediction.
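A polygenic risk score itself is simple arithmetic: a weighted sum of risk-allele counts per individual. The sketch below, with simulated genotypes and weights (not the study's data; the sizes merely echo the evaluation set), shows the score construction and its AUC evaluation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_individuals, n_variants = 2609, 133        # sizes echo 1,370 + 1,239 and 65 + 68
weights = rng.normal(0.0, 0.1, n_variants)   # per-variant log odds ratios (assumed)
genotypes = rng.binomial(2, 0.3, size=(n_individuals, n_variants))  # 0/1/2 risk alleles

prs = genotypes @ weights                    # polygenic risk score per individual
# Simulate case/control status so that a higher PRS implies higher risk.
prob_case = 1.0 / (1.0 + np.exp(-(prs - prs.mean())))
status = rng.binomial(1, prob_case)
print(f"AUC = {roc_auc_score(status, prs):.2f}")
```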