932 results for Stochastic simulation methods
Abstract:
We study a stochastic lattice model describing the dynamics of coexistence of two interacting biological species. The model comprises the local processes of birth, death, and diffusion of individuals of each species and is based on a predator-prey type of interaction. The coexistence of the species can be of two types: with self-sustained, coupled time oscillations of the population densities, or without oscillations. We perform numerical simulations of the model on a square lattice and analyze the temporal behavior of each species by computing the time correlation functions as well as the spectral densities. This analysis provides an appropriate characterization of the different types of coexistence. It is also used to examine linked population cycles in nature and in experiments.
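As a hedged illustration of the analysis step described in this abstract, the sketch below computes a normalized time correlation function and a spectral density from a population-density time series; the series here is a synthetic stand-in, not output of the paper's lattice model. A peak in the spectral density distinguishes oscillatory from non-oscillatory coexistence.

```python
import numpy as np

def autocorrelation(x):
    """Normalized time autocorrelation of a population-density series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c / c[0]

def spectral_density(x, dt=1.0):
    """One-sided power spectrum; a peak signals self-sustained oscillations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, power

# Example: an oscillatory series (stand-in for a simulated prey density)
t = np.arange(4096)
density = 0.5 + 0.1 * np.sin(2 * np.pi * t / 50) + 0.02 * np.random.randn(t.size)
corr = autocorrelation(density)
freqs, power = spectral_density(density)
print("dominant frequency:", freqs[np.argmax(power[1:]) + 1])  # ~ 1/50
```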
Abstract:
We study the electronic transport properties of a dual-gated bilayer graphene nanodevice via first-principles calculations. We investigate the electric current as a function of gate length and temperature. We show that, under an external electric field, a nonzero current is exhibited even for gate lengths of up to 100 angstroms. The results can be explained by the presence of a tunneling regime due to the remnant states in the gap. We also discuss the conditions required to reach the charge neutrality point in a system free of defects and extrinsic carrier doping.
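The tunneling picture invoked above can be illustrated with a textbook WKB transmission estimate through a rectangular, gate-induced barrier. This is only a sketch under strong assumptions (free-electron mass as a crude stand-in for the carrier effective mass, an illustrative 0.1 eV barrier); it is not the paper's first-principles transport calculation.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg (free-electron mass; illustrative only)
EV = 1.602176634e-19     # J per eV

def wkb_transmission(barrier_eV, length_angstrom, energy_eV=0.0):
    """Textbook WKB transmission through a rectangular barrier."""
    v = (barrier_eV - energy_eV) * EV
    if v <= 0:
        return 1.0
    length = length_angstrom * 1e-10
    kappa = np.sqrt(2 * M_E * v) / HBAR   # decay constant under the barrier
    return np.exp(-2 * kappa * length)

# Transmission is small but nonzero even for a 100-angstrom gate
for L in (25, 50, 100):
    print(L, "angstrom:", wkb_transmission(0.1, L))
```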
Abstract:
Positional information in developing embryos is specified by spatial gradients of transcriptional regulators. One of the classic systems for studying this is the activation of the hunchback (hb) gene in early fruit fly (Drosophila) segmentation by the maternally derived gradient of the Bicoid (Bcd) protein. Gene regulation is subject to intrinsic noise, which can produce variable expression. This variability must be constrained in the highly reproducible and coordinated events of development. We identify means by which noise is controlled during gene expression by characterizing the dependence of hb mRNA and protein output noise on hb promoter structure and transcriptional dynamics. We use a stochastic model of the hb promoter in which the number and strength of Bcd and Hb (self-regulatory) binding sites can be varied. Model parameters are fitted to data from WT embryos, the self-regulation mutant hb(14F), and lacZ reporter constructs using different portions of the hb promoter. We have corroborated the model's noise predictions experimentally. The results indicate that WT (self-regulatory) Hb output noise is predominantly dependent on the transcription and translation dynamics of its own expression, rather than on Bcd fluctuations. The constructs and mutant, which lack self-regulation, indicate that the multiple Bcd binding sites in the hb promoter (and their strengths) also play a role in buffering noise. The model is robust to the variation in Bcd binding site number across a number of fly species. This study identifies particular ways in which promoter structure and regulatory dynamics reduce hb output noise. Insofar as many of these are common features of genes (e.g., multiple regulatory sites, cooperativity, self-feedback), the current results contribute to the general understanding of the reproducibility and determinacy of spatial patterning in early development.
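A minimal Gillespie (stochastic simulation algorithm) sketch of the kind of stochastic promoter model described here: a two-state promoter with transcription only in the ON state and first-order mRNA decay. All rate constants are illustrative placeholders, not the fitted parameters of the paper; repeated runs give output-noise statistics such as the Fano factor.

```python
import random

def gillespie_promoter(k_on, k_off, k_tx, k_deg, t_max):
    """SSA for a two-state promoter with mRNA synthesis and decay."""
    t, on, mrna = 0.0, 0, 0
    while t < t_max:
        rates = [k_off if on else k_on,   # promoter switching
                 k_tx if on else 0.0,     # transcription (ON state only)
                 k_deg * mrna]            # mRNA degradation
        total = sum(rates)
        if total == 0:
            break
        t += random.expovariate(total)    # time to the next reaction
        r = random.uniform(0, total)      # which reaction fires
        if r < rates[0]:
            on = 1 - on
        elif r < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1
    return mrna

# Illustrative rates; the sample-to-sample spread is the output noise
samples = [gillespie_promoter(1.0, 0.5, 20.0, 1.0, 50.0) for _ in range(200)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print("mean:", mean, "Fano factor:", var / mean)
```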
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate certain Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous-time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary-state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z=D. One possible application of the DAA is to sandpile models. In this paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random-walker universality class (critical exponent sigma(tau)=3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we performed extensive Monte Carlo simulations and found sigma(tau)=1.780 +/- 0.005.
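A hedged sketch of how such avalanche statistics are gathered in practice: a generic one-dimensional stochastic sandpile (a Manna-style toppling rule, not the DAA construction itself) is driven at the leftmost site and the number of topplings per avalanche is recorded; the exponent sigma(tau) would then be read off a log-log plot of the size histogram.

```python
import random
from collections import Counter

def avalanche(height, threshold=2):
    """Drop one grain at site 0 and relax; return the number of topplings.
    On toppling, each grain hops to a random neighbour; grains leave at the edges."""
    L = len(height)
    height[0] += 1
    size = 0
    active = [0] if height[0] >= threshold else []
    while active:
        i = active.pop()
        while height[i] >= threshold:
            height[i] -= threshold
            size += 1
            for _ in range(threshold):
                j = i + random.choice((-1, 1))
                if 0 <= j < L:
                    height[j] += 1
                    if height[j] >= threshold:
                        active.append(j)
    return size

heights = [0] * 256
sizes = Counter(avalanche(heights) for _ in range(100000))
# A log-log plot of this histogram estimates the avalanche-size exponent
print(sorted(sizes.items())[:5])
```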
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation in the average degree k, with its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in the network identification rate, presenting very good results even with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
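A compact sketch of the validation loop in steps (1)-(3), under simplifying assumptions: an Erdos-Renyi-style AGN as ground truth, Boolean majority dynamics with observation noise as the simulated expression data, and an exhaustive feature-selection step that fixes a target gene and picks the predictor set minimizing its prediction error. Gene counts, update rule, and the error score are illustrative choices, not the paper's exact method.

```python
import itertools, random

def random_agn(n, p):
    """ER-style ground truth: net[g] is the set of regulators of gene g."""
    return {g: {h for h in range(n) if h != g and random.random() < p}
            for g in range(n)}

def simulate(net, steps, flip=0.01):
    """Boolean dynamics: a gene turns ON iff a majority of its regulators
    were ON; each observation is flipped with a small noise probability."""
    n = len(net)
    state = [random.randint(0, 1) for _ in range(n)]
    series = [state[:]]
    for _ in range(steps):
        state = [int(sum(state[r] for r in net[g]) * 2 > len(net[g]))
                 if net[g] else state[g] for g in range(n)]
        series.append([s ^ (random.random() < flip) for s in state])
    return series

def infer(series, g, max_set=2):
    """Fix target g; return the predictor set that best explains its next state."""
    n = len(series[0])
    def errors(preds):
        table = {}
        for t in range(len(series) - 1):
            key = tuple(series[t][p] for p in preds)
            table.setdefault(key, []).append(series[t + 1][g])
        return sum(min(v.count(0), v.count(1)) for v in table.values())
    candidates = [c for k in range(1, max_set + 1)
                  for c in itertools.combinations(range(n), k)]
    return set(min(candidates, key=errors))

# Step (3): compare the recovered predictors with the original network
truth = random_agn(10, 0.2)
data = simulate(truth, 200)
recovered = sum(len(infer(data, g) & truth[g]) for g in truth)
print("true edges recovered:", recovered, "/", sum(len(r) for r in truth.values()))
```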
Abstract:
We consider binary stochastic chains of infinite order perturbed by random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in the case of stochastic chains with unbounded but otherwise finite variable-length memory, we show that it is possible to recover the context tree of the original chain using a suitable version of the Context algorithm, provided that the noise is small enough.
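A small numerical illustration of the perturbation described, in the simplest case of a memory-1 chain (a special case of a variable-length-memory chain): each symbol is independently flipped with probability eps, and the empirical transition probabilities of the noisy chain stay uniformly close to those of the original when eps is small. Parameter values are illustrative.

```python
import random

def sample_chain(p, n):
    """Binary Markov chain: p[a] = P(next symbol = 1 | current symbol = a)."""
    x = [random.randint(0, 1)]
    for _ in range(n - 1):
        x.append(int(random.random() < p[x[-1]]))
    return x

def flip(x, eps):
    """Independent noise: each symbol is flipped with probability eps."""
    return [b ^ (random.random() < eps) for b in x]

def empirical_transitions(x):
    """Estimate P(1 | a) for a in {0, 1} from a sample path."""
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(x, x[1:]):
        counts[(a, b)] += 1
    return {a: counts[(a, 1)] / max(1, counts[(a, 0)] + counts[(a, 1)])
            for a in (0, 1)}

x = sample_chain({0: 0.2, 1: 0.7}, 100000)
print(empirical_transitions(x))              # close to {0: 0.2, 1: 0.7}
print(empirical_transitions(flip(x, 0.01)))  # uniformly close for small eps
```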
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error and mathematical regulatory networks, showing how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
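A small simulation of the attenuation effect described and of the standard textbook errors-in-variables correction: when the regressor is observed with noise of known variance, the OLS slope is shrunk by the reliability ratio and can be corrected by dividing it out. This is shown only as an illustration of the kind of adjustment proposed; the data and variances are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5000, 2.0
x = rng.normal(0, 1, n)                      # true expression level
y = beta * x + rng.normal(0, 0.5, n)         # response
sigma_u2 = 0.5                               # known measurement-error variance
w = x + rng.normal(0, np.sqrt(sigma_u2), n)  # observed, noisy regressor

beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
lam = (np.var(w, ddof=1) - sigma_u2) / np.var(w, ddof=1)  # reliability ratio
beta_corrected = beta_naive / lam

print(beta_naive)      # biased toward zero (~ beta / (1 + sigma_u2) ~ 1.33)
print(beta_corrected)  # ~ 2.0
```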
Abstract:
We study a general stochastic rumour model in which an ignorant individual has a certain probability of becoming a stifler immediately upon hearing the rumour. We refer to this special kind of stifler as an uninterested individual. Our model also includes distinct rates for meetings between two spreaders in which both become stiflers or only one does, so that the classical Daley-Kendall and Maki-Thompson models are particular cases. We prove a Law of Large Numbers and a Central Limit Theorem for the proportions of those who ultimately remain ignorant and those who have heard the rumour but become uninterested in it.
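A minimal event-driven sketch of the generalized rumour model described: ignorants may become uninterested stiflers immediately upon hearing the rumour (probability theta), and spreader-spreader meetings stifle both parties or only one. This simulates the embedded jump chain under illustrative parameters; the limiting proportions it estimates are the quantities governed by the paper's LLN and CLT.

```python
import random

def rumour(n, theta=0.2, p_both=0.5):
    """One run on n individuals; returns final proportions of
    (never-heard ignorants, uninterested, other stiflers)."""
    ignorant, spreaders, uninterested, stiflers = n - 1, 1, 0, 0
    while spreaders > 0:
        other = random.randrange(n - 1)         # partner of an active spreader
        if other < ignorant:                    # spreader meets an ignorant
            ignorant -= 1
            if random.random() < theta:
                uninterested += 1               # hears it, loses interest at once
            else:
                spreaders += 1
        elif other < ignorant + spreaders - 1:  # spreader meets a spreader
            if random.random() < p_both:
                spreaders -= 2; stiflers += 2   # both become stiflers
            else:
                spreaders -= 1; stiflers += 1   # only one does
        else:                                   # meets any kind of stifler
            spreaders -= 1; stiflers += 1
    return ignorant / n, uninterested / n, stiflers / n

runs = [rumour(10000) for _ in range(20)]
print("mean never-heard fraction:", sum(r[0] for r in runs) / len(runs))
```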
Abstract:
The solvation effect of the ionic liquid 1-N-butyl-3-methylimidazolium hexafluorophosphate on nucleophilic substitution reactions of halides at the aliphatic carbon of methyl p-nitrobenzenesulfonate (pNBS) was investigated by computer simulations. The calculations were performed using a hybrid quantum-mechanical/molecular-mechanical (QM/MM) methodology. A semiempirical Hamiltonian was first parametrized on the basis of comparison with ab initio calculations for the Cl(-) and Br(-) reactions with pNBS in the gas phase. In the condensed phase, free energy profiles were obtained for both reactions. The calculated reaction barriers are in agreement with experiment. The structure of the species solvated by the ionic liquid was followed along the reaction progress from the reagents, through the transition state, to the final products. The simulations indicate that this substitution reaction in the ionic liquid is slower than in nonpolar molecular solvents owing to the significant stabilization of the halide anion by the ionic liquid in comparison with the transition state, where the charge is delocalized. Solute-solvent interactions in the first solvation shell involve several hydrogen bonds that are formed or broken in response to the charge density variation along the reaction coordinate. The detailed structural analysis can be used to rationalize the design of new ionic liquids with tailored solvation properties.
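Free energy profiles like those mentioned above are commonly derived from the sampled distribution of a reaction coordinate via F(xi) = -kT ln P(xi). The sketch below shows only that generic post-processing step on synthetic samples; it is not the paper's QM/MM machinery, and in practice barrier regions require enhanced sampling (e.g., umbrella sampling) rather than a plain histogram.

```python
import numpy as np

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol K)

def free_energy_profile(xi_samples, temperature=300.0, bins=50):
    """F(xi) = -kT ln P(xi) from sampled reaction-coordinate values."""
    hist, edges = np.histogram(xi_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                           # drop unsampled bins
    f = -KB_KCAL * temperature * np.log(hist[mask])
    return centers[mask], f - f.min()         # profile relative to its minimum

# Illustrative samples of a reaction coordinate (a reagent-basin stand-in)
xi = np.random.normal(-1.5, 0.3, 100000)
centers, profile = free_energy_profile(xi)
print(profile[:5])
```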
Abstract:
Structural and dynamical properties of liquid trimethylphosphine (TMP), (CH3)3P, as a function of temperature are investigated by molecular dynamics (MD) simulations. The force field used in the MD simulations, which was proposed from molecular mechanics and quantum chemistry calculations, is able to reproduce the experimental density of liquid TMP at room temperature. The equilibrium structure is investigated by the usual radial distribution function, g(r), and also in reciprocal space by the static structure factor, S(k). On the basis of center-of-mass distances, liquid TMP behaves like a simple liquid of almost spherical particles, but orientational correlation due to dipole-dipole interactions is revealed at short-range distances. Single-particle and collective dynamics are investigated by several time correlation functions. At high temperatures, diffusion and reorientation occur on the same time range as the relaxation of the liquid structure. Decoupling of these dynamic properties starts below ca. 220 K, when the rattling dynamics of a given TMP molecule, due to the cage effect of neighbouring molecules, becomes important.
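A minimal sketch of the center-of-mass g(r) analysis mentioned above, for one configuration of particles in a cubic box with periodic boundary conditions; in a real analysis the coordinates would come from the MD trajectory and the histogram would be averaged over frames.

```python
import numpy as np

def radial_distribution(positions, box, n_bins=100):
    """g(r) for N particles in a cubic box with periodic boundaries."""
    n = len(positions)
    r_max = box / 2
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))[0]
    edges = np.linspace(0, r_max, n_bins + 1)
    shell = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    # each pair is counted once -> factor 2/n gives a per-particle average
    return edges[1:], 2 * hist / (n * shell * density)

# Example with random (ideal-gas) coordinates: g(r) ~ 1 everywhere
pos = np.random.uniform(0, 20.0, (500, 3))
r, g = radial_distribution(pos, 20.0)
print(g[-10:])   # close to 1 at large r for an uncorrelated system
```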
Abstract:
Background: Mutations in TP53 are common events during carcinogenesis. In addition to gene mutations, several reports have focused on TP53 polymorphisms as risk factors for malignant disease. Many studies have highlighted that the status of the TP53 codon 72 polymorphism could influence cancer susceptibility. However, the results have been inconsistent, and various methodological features can contribute to departures from Hardy-Weinberg equilibrium, a condition that may influence the disease risk estimates. The most widely accepted method of detecting genotyping error is to confirm genotypes by sequencing and/or via a separate method. Results: We developed two new genotyping methods for TP53 codon 72 polymorphism detection: Denaturing High Performance Liquid Chromatography (DHPLC) and dot blot hybridization. These methods were compared with Restriction Fragment Length Polymorphism (RFLP) analysis using two different restriction enzymes. We observed high agreement among all methodologies assayed. Dot blot hybridization and DHPLC results were more highly concordant with each other than when either of these methods was compared with RFLP. Conclusions: Although variations may occur, our results indicate that DHPLC and dot blot hybridization can be used as reliable screening methods for TP53 codon 72 polymorphism detection, especially in molecular epidemiologic studies, where high-throughput methodologies are required.
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of the areas are being harvested without burning, leaving the residues on the soil surface. This periodic input of residues affects soil physical, chemical, and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model against field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effects of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments with different durations, from 12 months to 60 years, in Goiana, Timbaúba, and Pradópolis, Brazil; Harwood, Mackay, and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two pools in the litter, with different decomposition rates, was found to be a relevant factor in the simulations. Originally, the model had a practically unlimited layer of mulch directly available for decomposition, 5,000 g m^-2. Through a parameter optimization process, the thickness of the mulch layer closer to the soil, and hence more vulnerable to decomposition, was set to 110 g m^-2. By limiting the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were close to the measured values (R^2 = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns. The CENTURY model accurately simulated aboveground stalk carbon values (R^2 = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, in different climate and soil conditions.
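A small sketch of the exposed-layer idea described above: only a surface layer of at most 110 g m^-2 of mulch is subject to first-order decomposition at each step, with the remaining residue shielded until that layer shrinks. The rate constant and time step are illustrative placeholders, not CENTURY parameters.

```python
def decompose_mulch(total, exposed_cap=110.0, k=0.015, steps=365):
    """Daily first-order decay applied only to the exposed mulch layer (g m^-2)."""
    history = []
    for _ in range(steps):
        exposed = min(total, exposed_cap)  # layer vulnerable to decomposition
        total -= k * exposed               # the rest is shielded for now
        history.append(total)
    return history

residue = decompose_mulch(1200.0)          # e.g., a post-harvest trash blanket
print(round(residue[-1], 1), "g m^-2 remaining after one year")
```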
Abstract:
Currently there is a trend toward the expansion of the area cropped with sugarcane (Saccharum officinarum L.), driven by an increase in the world demand for biofuels due to economic, environmental, and geopolitical issues. Although sugarcane is traditionally harvested by burning dried leaves and tops, the unburned, mechanized harvest has been progressively adopted. The use of process-based models is useful in understanding the effects of plant litter on soil C dynamics. The objective of this work was to use the CENTURY model to evaluate the effect of sugarcane residue management on the temporal dynamics of soil C. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of soil C, validating the model against field experiment data, and finally to make long-term predictions regarding soil C. The main focus of this work was the comparison of soil C stocks between the burned and unburned litter management systems, but the effects of mineral fertilizer and organic residue applications were also evaluated. The simulations were performed with data from experiments with different durations, from 1 to 60 years, in Goiana and Timbaúba, Pernambuco, and Pradópolis, São Paulo, all in Brazil; and Mount Edgecombe, KwaZulu-Natal, South Africa. It was possible to simulate the temporal dynamics of soil C (R^2 = 0.89). The predictions made with the model revealed that there is, in the long term, a trend toward higher soil C stocks with the unburned management. This increase is conditioned by factors such as climate, soil texture, time since the adoption of the unburned system, and N fertilizer management.
Abstract:
It has been demonstrated that laser-induced breakdown spectrometry (LIBS) can be used as an alternative method for the determination of macronutrients (P, K, Ca, Mg) and micronutrients (B, Fe, Cu, Mn, Zn) in pellets of plant materials. However, information is required regarding sample preparation for plant analysis by LIBS. In this work, methods involving cryogenic grinding and planetary ball milling were evaluated for leaf comminution before pellet preparation. The particle sizes were associated with chemical sample properties such as fiber and cellulose contents, as well as with pellet porosity and density. The pellets were ablated at 30 different sites by applying 25 laser pulses per site (Nd:YAG@1064 nm, 5 ns, 10 Hz, 25 J cm^-2). The plasma emission collected by lenses was directed through an optical fiber towards a high-resolution echelle spectrometer equipped with an ICCD. The delay time and integration time gate were fixed at 2.0 and 4.5 µs, respectively. Experiments carried out with pellets of sugarcane, orange tree, and soy leaves showed a significant effect of the plant species on the choice of the most appropriate grinding conditions. Using ball milling with agate materials, 20 min of grinding for orange tree and soy leaves, and 60 min for sugarcane leaves, led to particle size distributions generally below 75 µm. Cryogenic grinding yielded similar particle size distributions after 10 min for orange tree, 20 min for soy, and 30 min for sugarcane leaves. There was up to 50% emission signal enhancement in the LIBS measurements for most elements upon improving the particle size distribution and, consequently, the pellet porosity.
Abstract:
The aim of this paper is to highlight some of the methods for representing image-based information, reviewing the literature of the area and proposing a methodological model adapted to Brazilian museums. A methodology for representing image-based information is developed based on Brazilian characteristics of information treatment, in order to adapt it to museums. Finally, spreadsheets that illustrate this methodology are presented.