975 results for Numerical experiments
Abstract:
Using numerical simulations, we compare properties of knotted DNA molecules that are either torsionally relaxed or supercoiled. We observe that DNA supercoiling tightens knotted portions of DNA molecules and accentuates the difference in curvature between knotted and unknotted regions. The increased curvature of knotted regions is expected to make them preferential substrates of type IIA topoisomerases because various earlier experiments have concluded that type IIA DNA topoisomerases preferentially interact with highly curved DNA regions. The supercoiling-induced tightening of DNA knots observed here shows that torsional tension in DNA may serve to expose DNA knots to the unknotting action of type IIA topoisomerases, and thus explains how these topoisomerases could maintain a low knotting equilibrium in vivo, even for long DNA molecules.
Abstract:
In this paper, we develop numerical algorithms for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields) that require little storage and few operations. The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist of solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical aspects of the algorithms (speed, storage and stability) and refer to the papers above for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the associated invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for the iteration of quasi-periodic cocycles and the computation of their invariant bundles, a preliminary step for the computation of invariant whiskered tori. Since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here allow one to compute, in a unified way, both primary and secondary invariant KAM tori; secondary tori are invariant tori that can be contracted to a periodic orbit. We present some preliminary results showing that the methods are indeed implementable and fast. We postpone optimized implementations and results on the breakdown of invariant tori to a future paper.
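As a concrete (and deliberately naive) illustration of the invariance-equation approach, the following Python sketch computes a maximal invariant torus of the standard map by a Newton method on a Fourier grid. Unlike the algorithms of the paper, it builds a dense finite-difference Jacobian, so it has none of their storage or speed advantages; the map, perturbation strength, and grid size are illustrative choices.

```python
import numpy as np

# Sketch: Newton method for the invariance equation f(K(theta)) = K(theta + omega)
# on the standard map f(x, p) = (x + p + eps*sin(x), p + eps*sin(x)), with the
# torus parameterized as K(theta) = (theta + u(theta), v(theta)) on a Fourier grid.
# NOTE: this naive version stores a dense Jacobian; the paper's algorithms avoid that.

def shift(g, omega):
    """Evaluate a periodic grid function at theta + omega via an FFT phase shift."""
    N = len(g)
    k = np.fft.fftfreq(N) * N                      # integer wave numbers
    return np.fft.ifft(np.fft.fft(g) * np.exp(1j * k * omega)).real

def residual(z, omega, eps, N):
    """Grid values of F(theta) = f(K(theta)) - K(theta + omega)."""
    u, v = z[:N], z[N:]
    theta = 2 * np.pi * np.arange(N) / N
    p1 = v + eps * np.sin(theta + u)               # standard map, momentum first
    x1 = theta + u + p1
    ru = x1 - (theta + omega) - shift(u, omega)    # angle-component error
    rv = p1 - shift(v, omega)                      # momentum-component error
    return np.concatenate([ru, rv])

def newton_torus(omega, eps, N=64, tol=1e-10, h=1e-7):
    z = np.concatenate([np.zeros(N), np.full(N, omega)])   # unperturbed-torus guess
    for _ in range(30):
        F = residual(z, omega, eps, N)
        if np.max(np.abs(F)) < tol:
            return z
        J = np.empty((2 * N, 2 * N))               # dense Jacobian by finite differences
        for j in range(2 * N):
            dz = np.zeros(2 * N); dz[j] = h
            J[:, j] = (residual(z + dz, omega, eps, N) - F) / h
        # extra row pins the mean of u to zero, fixing the free phase of the torus
        J_aug = np.vstack([J, np.concatenate([np.full(N, 1.0 / N), np.zeros(N)])])
        F_aug = np.append(F, np.mean(z[:N]))
        z = z - np.linalg.lstsq(J_aug, F_aug, rcond=None)[0]
    raise RuntimeError("Newton iteration did not converge")

omega = 2 * np.pi * (np.sqrt(5) - 1) / 2           # golden-mean rotation number
z = newton_torus(omega, eps=0.2)
```

The pinning row is needed because any rephased torus K(θ + σ) also solves the equation, so the plain Jacobian is singular at a solution; the mean-zero condition is a simple substitute for the normalization used in the rigorous treatments.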
Abstract:
This study investigates the self-selection of stakeholders into participation and collaboration in policy-relevant experiments. We document and test the implications of self-selection in the context of a randomised policy experiment we conducted in primary schools in the UK. The main questions we ask are (1) is there evidence of selection on key observable characteristics likely to matter for the outcome of interest, and (2) does selection matter for the estimates of treatment effects. The experimental work consists of testing the effects of an intervention aimed at encouraging children to make healthier choices at lunch. We recruited schools through local authorities and randomised schools across two incentive treatments and a control group. We document the selection taking place both at the level of local authorities and at the school level. Overall, we find mild evidence of selection on key observables such as obesity levels and socio-economic characteristics. We find evidence of selection along indicators of involvement in healthy-lifestyle programmes at the school level, but the magnitude is small. Moreover, we do not find significant differences in the treatment effects of the experiment across variables which, albeit to a mild degree, are correlated with selection into the experiment. To our knowledge, this is the first study providing direct evidence on the magnitude of self-selection in field experiments.
Abstract:
How preferences respond to the varying stress of economic environments is a key question for behavioral economics and public policy. We conducted a laboratory experiment to investigate the effects of stress on financial decision making among individuals aged 50 and older. Using the cold pressor task as a physiological stressor and a series of intelligence tests as cognitive stressors, we find that stress increases subjective discounting rates, has no effect on the degree of risk aversion, and substantially lowers the effort individuals make to learn about financial decisions.
Abstract:
Type 2 diabetes mellitus (T2DM) is a major disease affecting nearly 280 million people worldwide. Whilst the pathophysiological mechanisms leading to disease are poorly understood, dysfunction of the insulin-producing pancreatic beta-cells is a key event in disease development. Monitoring the gene expression profiles of pancreatic beta-cells under several genetic or chemical perturbations has shed light on genes and pathways involved in T2DM. The EuroDia database has been established to build a unique collection of gene expression measurements performed on beta-cells of three organisms, namely human, mouse and rat. The Gene Expression Data Analysis Interface (GEDAI) has been developed to support this database. The quality of each dataset is assessed by a series of quality-control procedures to detect putative hybridization outliers. The system integrates a web interface to several standard analysis functions from R/Bioconductor to identify differentially expressed genes and pathways. It also allows the combination of multiple experiments performed on different array platforms of the same technology. The design of this system enables each user to rapidly design a custom analysis pipeline and thus produce their own list of genes and pathways. Raw and normalized data can be downloaded for each experiment. The flexible engine of this database (GEDAI) is currently used to handle gene expression data from several laboratory-run projects dealing with different organisms and platforms. Database URL: http://eurodia.vital-it.ch.
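GEDAI itself wraps standard R/Bioconductor routines; the following standalone Python sketch, on made-up data, only illustrates the kind of per-gene test and multiple-testing correction such a differential-expression pipeline performs (all numbers here are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical data: log-expression of 1000 genes in 4 control and 4 perturbed
# beta-cell samples, with the first 50 genes truly up-regulated.
rng = np.random.default_rng(0)
control = rng.normal(5.0, 1.0, size=(1000, 4))
treated = rng.normal(5.0, 1.0, size=(1000, 4))
treated[:50] += 2.0

t, p = stats.ttest_ind(treated, control, axis=1, equal_var=False)  # per-gene Welch t-test

# Benjamini-Hochberg false-discovery-rate control
order = np.argsort(p)
ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
qvals = np.minimum.accumulate(ranked[::-1])[::-1][np.argsort(order)]
diff_expressed = np.flatnonzero(qvals < 0.05)
print(f"{diff_expressed.size} genes called differentially expressed")
```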
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use widened to address different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
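For reference, the governing equation named above can be written in a generic form (notation ours, not taken verbatim from the PECUBE papers): the temperature field T(x, y, z, t) in rock moving with kinematic velocity v satisfies

```latex
\rho c \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right)
  = \nabla \cdot \left( k \, \nabla T \right) + \rho H,
\qquad
T\bigl(x,\, y,\, z = S(x, y, t),\, t\bigr) = T_s ,
```

where k is the thermal conductivity, ρc the volumetric heat capacity, and H the radiogenic heat production per unit mass; the second condition imposes a fixed surface temperature T_s on the evolving topography z = S(x, y, t), which is what makes the surface boundary condition temporally varying.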
Abstract:
Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and previously unknown artifacts of the method. The sequence-tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence-tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of the Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin-barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
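To illustrate the random-sampling idea, here is a minimal, hypothetical Python sketch: draw unbiased subsamples of mapped tag positions and recount how many candidate binding sites remain covered, which shows how conclusions depend on sequencing depth. The tag positions, site list, and window size are placeholders, not the thesis pipeline.

```python
import bisect
import random

def subsample_tags(tags, fraction, seed=None):
    """Return an unbiased random subsample of mapped tags (without replacement)."""
    rng = random.Random(seed)
    return rng.sample(tags, int(len(tags) * fraction))

def count_covered_sites(tags, sites, window=200):
    """Count candidate sites hit by at least one tag within +/- window bp."""
    positions = sorted(tags)
    hit = 0
    for s in sites:
        i = bisect.bisect_left(positions, s - window)
        if i < len(positions) and positions[i] <= s + window:
            hit += 1
    return hit

# toy data: tag start positions and candidate binding sites on one chromosome
random.seed(0)
tags = [random.randrange(0, 1_000_000) for _ in range(20_000)]
sites = list(range(0, 1_000_000, 5_000))
for frac in (0.25, 0.5, 1.0):
    sub = subsample_tags(tags, frac, seed=1)
    print(frac, count_covered_sites(sub, sites))
```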
Abstract:
To describe the collective behavior of large ensembles of neurons in neuronal networks, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of probability distribution functions and, thus, the evolution of all macroscopic quantities given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario can be uncovered by increasing the strength of the excitatory coupling: asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.
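To make the idea of a deterministic solver concrete, here is a minimal sketch on a toy problem: a conservative finite-volume discretization of a one-dimensional Fokker-Planck equation with Ornstein-Uhlenbeck drift. This is not the conductance-based network model of the paper; the equation, grid, and parameters are illustrative only.

```python
import numpy as np

# Toy deterministic Fokker-Planck solver: d_t rho = d_v[(v - mu) rho] + D d_v^2 rho,
# written as d_t rho + d_v F = 0 with flux F = -(v - mu) rho - D d_v rho.
# Upwinded drift, centred diffusion, zero-flux boundaries (total mass is conserved).
N, L = 200, 4.0
v = np.linspace(-L, L, N)
dv = v[1] - v[0]
mu, D = 0.0, 0.5
dt = 0.2 * dv**2 / D                                 # explicit stability restriction

rho = np.exp(-(v - 1.5) ** 2)                        # off-centre initial density
rho /= rho.sum() * dv

def step(rho):
    s = -(0.5 * (v[:-1] + v[1:]) - mu)               # advection speed at interfaces
    rho_face = np.where(s > 0, rho[:-1], rho[1:])    # upwind cell value
    F = s * rho_face - D * (rho[1:] - rho[:-1]) / dv
    F = np.concatenate(([0.0], F, [0.0]))            # zero-flux walls
    return rho - dt * (F[1:] - F[:-1]) / dv

for _ in range(20000):
    rho = step(rho)
# rho now approximates the Gaussian steady state ~ exp(-(v - mu)^2 / (2 D)),
# and any moment of the density can be read off, e.g. mean = (rho * v).sum() * dv.
```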
Abstract:
The Keller-Segel system has been widely proposed as a model for bacterial waves driven by chemotactic processes. Recent experiments on E. coli have shown the precise structure of traveling pulses. We present here an alternative mathematical description of traveling pulses at a macroscopic scale. This modeling task is complemented with numerical simulations in accordance with the experimental observations. Our model is derived from an accurate kinetic description of the mesoscopic run-and-tumble process performed by bacteria. This model can account for recent experimental observations with E. coli. Qualitative agreements include the asymmetry of the pulse and the transition in collective behaviour (clustered motion versus dispersion). In addition, we can capture quantitatively the main characteristics of the pulse, such as the speed and the relative size of the tails. This work opens several experimental and theoretical perspectives. Coefficients at the macroscopic level are derived from considerations at the cellular scale. For instance, the stiffness of the signal integration process turns out to have a strong effect on collective motion. Furthermore, the bottom-up scaling allows us to perform preliminary mathematical analysis and to write efficient numerical schemes. This model is intended as a predictive tool for the investigation of bacterial collective motion.
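For orientation, the classical Keller-Segel system referred to in the opening sentence reads, in one space dimension (this is the standard textbook form; the macroscopic model derived in the paper modifies the chemotactic flux):

```latex
\partial_t \rho = D_\rho\, \partial_{xx} \rho - \partial_x \left( \chi\, \rho\, \partial_x c \right),
\qquad
\partial_t c = D_c\, \partial_{xx} c + \alpha \rho - \beta c ,
```

with ρ the bacterial density, c the chemoattractant concentration, χ the chemotactic sensitivity, and α, β production and degradation rates.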
Abstract:
In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed-boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed, depending on whether the vegetation is dominated by bending or tensile forces. For bending plants, a model structured on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants, an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that, for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem scale, which leads to the drag discontinuity and associated mixing layer.
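For the bending-dominated case, the Euler-Bernoulli beam equation underlying such a model takes the standard dynamic form (generic notation with a distributed hydrodynamic load; the paper's exact formulation and coefficients may differ):

```latex
\rho_s A\, \frac{\partial^2 w}{\partial t^2} + E I\, \frac{\partial^4 w}{\partial z^4} = f(z, t),
```

where w(z, t) is the stem deflection, ρ_s A the stem mass per unit length, EI the flexural rigidity, and f(z, t) the drag load exerted by the flow, which in a coupled LES would be evaluated from the local resolved velocity.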
Abstract:
Warming experiments are increasingly relied on to estimate plant responses to global climate change. For experiments to provide meaningful predictions of future responses, they should reflect the empirical record of responses to temperature variability and recent warming, including advances in the timing of flowering and leafing. We compared phenology (the timing of recurring life history events) in observational studies and warming experiments spanning four continents and 1,634 plant species using a common measure of temperature sensitivity (change in days per degree Celsius). We show that warming experiments underpredict advances in the timing of flowering and leafing by 8.5-fold and 4.0-fold, respectively, compared with long-term observations. For species that were common to both study types, the experimental results did not match the observational data in sign or magnitude. The observational data also showed that species that flower earliest in the spring have the highest temperature sensitivities, but this trend was not reflected in the experimental data. These significant mismatches seem to be unrelated to the study length or to the degree of manipulated warming in experiments. The discrepancy between experiments and observations, however, could arise from complex interactions among multiple drivers in the observational data, or it could arise from remediable artefacts in the experiments that result in lower irradiance and drier soils, thus dampening the phenological responses to manipulated warming. Our results introduce uncertainty into ecosystem models that are informed solely by experiments and suggest that responses to climate change that are predicted using such models should be re-evaluated.
Abstract:
Numerical analyses (correspondence analysis, ascending hierarchical classification, cladistic approach) were applied to the morphological characters of the adults of the genus Phlebotomus Rondani & Berté 1840. They confirm the reliability of the classic classifications, and also redefine the taxonomic and phylogenetic position of certain taxa. Thus, Spelaeophlebotomus Theodor 1948, Idiophlebotomus Quate & Fairchild 1961 and Australophlebotomus Theodor 1948 deserve generic rank. Among the vectors of leishmaniasis, the subgenus Phlebotomus Rondani & Berté 1840 is probably ancient. The results attribute an intermediate taxonomic and phylogenetic position to the taxa Euphlebotomus Theodor 1948 and Anaphlebotomus Theodor 1948, and reveal the probable artificial nature of the latter. The comparatively large numbers of species of subgenera Paraphlebotomus Theodor 1948, Synphlebotomus Theodor 1948 and, above all, Larroussius Nitzulescu 1931 and Adlerius Nitzulescu 1931, suggest that they are relatively recent. The development of adult morphological characters, the validity of their use in taxonomy and proposals for further studies are discussed.
Abstract:
Numerical analyses (correspondence analysis, ascending hierarchical classification, and cladistics) were done with morphological characters of adult phlebotomine sand flies. The resulting classification largely confirms that of classical taxonomy for supra-specific groups from the Old World, though the positions of some groups are adjusted. The taxa Spelaeophlebotomus Theodor 1948, Idiophlebotomus Quate & Fairchild 1961, Australophlebotomus Theodor 1948 and Chinius Leng 1987 are notably distinct from other Old World groups, particularly from the genus Phlebotomus Rondani & Berté 1840. Spelaeomyia Theodor 1948 and, in particular, Parvidens Theodor & Mesghali 1964 are clearly separate from Sergentomyia França & Parrot 1920.
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water-surface slope for a variety of roughness lengths. This proved difficult, as the metrics used to assess optimal model performance diverged owing to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D. et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi:10.1029/2011WR011284.
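Roughness-length treatments of this kind rest on the logarithmic law of the wall; in its generic form (standard boundary-layer theory, not the specific parameterization tuned in the paper) the near-bed velocity profile is

```latex
\frac{u(z)}{u_*} = \frac{1}{\kappa} \ln\!\left( \frac{z}{z_0} \right),
```

where u_* is the shear velocity, κ ≈ 0.41 the von Kármán constant, and z_0 the roughness length, the single parameter into which the drag of unmeasured subgrid topography is absorbed.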