969 results for intervention modelling experiments


Relevance:

30.00%

Publisher:

Abstract:

A fast, simple climate modelling approach is developed for predicting and helping to understand general circulation model (GCM) simulations. We show that the simple model reproduces the GCM results accurately for global-mean surface air temperature change and global-mean heat uptake projections from 9 GCMs in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). This implies that understanding gained from idealised CO2 step experiments is applicable to policy-relevant scenario projections. Our approach is conceptually simple. It works by using the climate response to a CO2 step change taken directly from a GCM experiment. With radiative forcing from non-CO2 constituents obtained by adapting the Forster and Taylor method, we use our method to estimate results for CMIP5 representative concentration pathway (RCP) experiments for cases not run by the GCMs. We estimate differences between pairs of RCPs rather than RCP anomalies relative to the pre-industrial state. This gives better results because it makes greater use of the available GCM projections. The GCMs exhibit differences in radiative forcing, which we incorporate in the simple model. We analyse the thus-completed ensemble of RCP projections. The ensemble-mean changes between 1986–2005 and 2080–2099 for global temperature (heat uptake) are, for RCP8.5: 3.8 K (2.3 × 10²⁴ J); for RCP6.0: 2.3 K (1.6 × 10²⁴ J); for RCP4.5: 2.0 K (1.6 × 10²⁴ J); for RCP2.6: 1.1 K (1.3 × 10²⁴ J). The relative spread (standard deviation/ensemble mean) for these scenarios is around 0.2 for temperature and 0.15 for heat uptake. We quantify the relative effect of mitigation action, through reduced emissions, via the time-dependent ratios (change in RCPx)/(change in RCP8.5), using changes with respect to pre-industrial conditions. We find that the effects of mitigation on global-mean temperature change and heat uptake are very similar across these different GCMs.
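The step-response idea can be sketched as a linear superposition: a forcing pathway is treated as a sequence of small forcing increments, and correspondingly scaled, time-shifted copies of the GCM's CO2-step temperature response are added up. The response curve, forcing ramp and all numbers below are hypothetical placeholders for illustration, not values from the paper:

```python
def emulate(step_response, forcing, f_step):
    """Superpose scaled copies of a GCM's step-experiment response.

    step_response : temperature response (K) to an abrupt forcing of f_step
    forcing       : annual radiative forcing series (W m-2)
    f_step        : forcing applied in the idealised step experiment (W m-2)
    """
    n = len(forcing)
    temp = [0.0] * n
    prev = 0.0
    for t, f in enumerate(forcing):
        df = f - prev          # this year's forcing increment acts as a small step
        prev = f
        for k in range(t, n):  # add its scaled step response to all later years
            temp[k] += (df / f_step) * step_response[k - t]
    return temp

# Hypothetical step response relaxing towards 3 K, and a linear forcing ramp:
step = [3.0 * (1 - 0.8 ** (y + 1)) for y in range(50)]
ramp = [0.1 * (y + 1) for y in range(50)]
warming = emulate(step, ramp, f_step=3.7)
```

For a monotonically increasing forcing ramp the emulated warming is itself monotone, since every increment contributes a non-negative, growing response.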


IntFOLD is an independent web server that integrates our leading methods for structure and function prediction. The server provides a simple unified interface that aims to make complex protein modelling data more accessible to life scientists. The server web interface is designed to be intuitive and integrates a complex set of quantitative data, so that 3D modelling results can be viewed on a single page and interpreted by non-expert modellers at a glance. The only required input to the server is an amino acid sequence for the target protein. Here we describe major performance and user interface updates to the server, which comprises an integrated pipeline of methods for: tertiary structure prediction, global and local 3D model quality assessment, disorder prediction, structural domain prediction, function prediction and modelling of protein-ligand interactions. The server has been independently validated during numerous CASP (Critical Assessment of Techniques for Protein Structure Prediction) experiments, as well as being continuously evaluated by the CAMEO (Continuous Automated Model Evaluation) project. The IntFOLD server is available at: http://www.reading.ac.uk/bioinf/IntFOLD/


We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime.


We describe the creation of a data set describing changes related to the presence of ice sheets, including ice-sheet extent and height, ice-shelf extent, and the distribution and elevation of ice-free land, at the Last Glacial Maximum (LGM), which was used in the LGM experiments conducted as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, all obtained using a model-inversion approach but differing in the assumptions used in the modelling and in the type of data used as constraints. The ice-sheet extent in the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and the ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Although largely confined to the ice-covered regions, differences between the climate responses to the individual LGM reconstructions extend over the North Atlantic Ocean and Northern Hemisphere continents, partly through atmospheric stationary waves. Differences between the climate response to the CMIP5/PMIP3 composite and to any individual ice-sheet reconstruction are smaller than those between the CMIP5/PMIP3 composite and the ice sheet used in the last phase of PMIP (PMIP2).


The goal of the Palaeoclimate Modelling Intercomparison Project (PMIP) is to understand the response of the climate system to changes in different climate forcings and to feedbacks. Through comparison with observations of the environmental impacts of these climate changes, or with climate reconstructions based on physical, chemical or biological records, PMIP also addresses the issue of how well state-of-the-art models simulate climate changes. Palaeoclimate states are radically different from those of the recent past documented by the instrumental record and thus provide an out-of-sample test of the models used for future climate projections and a way to assess whether they have the correct sensitivity to forcings and feedbacks. Five distinctly different periods have been selected as the focus for the core palaeoclimate experiments that are designed to contribute to the objectives of the sixth phase of the Coupled Model Intercomparison Project (CMIP6). This manuscript describes the motivation for the choice of these periods and the design of the numerical experiments, with a focus upon their novel features compared to the experiments performed in previous phases of PMIP and CMIP, as well as the benefits of common analyses of the models across multiple climate states. It also describes the information needed to document each experiment and the model outputs required for analysis and benchmarking.


Land use leads to massive habitat destruction and fragmentation in tropical forests. Despite its global dimensions, the effects of fragmentation on ecosystem dynamics are not well understood due to the complexity of the problem. We present a simulation analysis performed with the individual-based model FORMIND. The model was applied to the Brazilian Atlantic Forest, one of the world's biodiversity hot spots, on the Plateau of São Paulo. This study investigates the long-term effects of fragmentation processes on the structure and dynamics of remnant tropical forest fragments of different sizes (1-100 ha) at the community and plant functional type (PFT) level. We disentangle the interplay of the single effects of different key fragmentation processes (edge mortality, increased mortality of large trees, local seed loss and external seed rain) using simulation experiments in a full factorial design. Our analysis reveals that particularly small forest fragments below 25 ha suffer substantial structural changes, biomass loss and biodiversity loss in the long term. At the community level, biomass is reduced by up to 60%. Two-thirds of the mid- and late-successional species groups, especially the shade-tolerant (late-successional climax) species groups, are prone to extinction in small fragments. The shade-tolerant species groups were most strongly affected; their tree number was reduced by more than 60%, mainly through increased edge mortality. This process proved to be the most powerful of those investigated, alone explaining more than 80% of the changes observed for this group. External seed rain was able to compensate for approximately 30% of the observed fragmentation effects for shade-tolerant species. Our results suggest that tropical forest fragments will suffer strong structural changes in the long term, leading to tree species impoverishment. They may reach a new equilibrium with a substantially reduced subset of the initial species pool, and are driven towards an earlier successional state. The natural regeneration potential of a landscape scattered with forest fragments appears to be limited, as external seed rain is not able to fully compensate for the observed fragmentation-induced changes. Our findings suggest basic recommendations for the management of fragmented tropical forest landscapes.
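A full factorial design over the four fragmentation processes named above can be enumerated mechanically. Treating each process as a binary on/off factor is an assumption made here for illustration; the actual FORMIND experiments may use different factor levels:

```python
from itertools import product

# The four fragmentation processes from the abstract, modelled here as
# binary on/off factors (an assumption for illustration only):
factors = ["edge_mortality", "large_tree_mortality",
           "local_seed_loss", "external_seed_rain"]

# Full factorial design: one simulation experiment per combination of levels.
design = [dict(zip(factors, levels))
          for levels in product([False, True], repeat=len(factors))]

print(len(design))  # 2**4 = 16 experiments
```

The factorial structure is what allows the single effects of each process (and their interactions) to be disentangled statistically.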


1. The objective of this study was to determine a metabolisable energy (ME) requirement model for broiler breeder hens. The influence of temperature on ME requirements for maintenance was determined in experiments conducted in three environmental rooms, with temperatures kept constant at 13, 21 and 30 °C, using a comparative slaughter technique. The energy requirements for weight gain were determined based upon body energy content and the efficiency of energy utilisation for weight gain. The energy requirements for egg production were determined on the basis of egg energy content and the efficiency of energy deposition in the eggs.

2. The following model was developed using these results: ME = kgW^0.75 × (806.53 − 26.45T + 0.50T²) + 31.90G + 10.04EM, where kgW^0.75 is body weight (kg) raised to the power 0.75, T is temperature (°C), G is weight gain (g) and EM is egg mass (g).

3. A feeding trial was conducted using 400 Hubbard Hi-Yield broiler breeder hens and 40 Peterson males from 31 to 46 weeks of age in order to compare use of the model with a recommended feeding programme for this strain of bird. The application of the model in breeder hens provided good productive and reproductive performance and better results in feed and energy conversion than in hens fed according to the strain recommendation. In conclusion, the evaluated model predicted an ME intake which matched breeder hens' requirements.
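The model in point 2 can be evaluated directly. The example inputs below (body weight, temperature, gain, egg mass) are hypothetical values chosen for illustration, not data from the trial:

```python
def me_requirement(weight_kg, temp_c, gain_g, egg_mass_g):
    """ME requirement from the model in the abstract:
    ME = kgW^0.75 (806.53 - 26.45 T + 0.50 T^2) + 31.90 G + 10.04 EM
    (units as in the paper's model)."""
    maintenance = weight_kg ** 0.75 * (806.53 - 26.45 * temp_c + 0.50 * temp_c ** 2)
    return maintenance + 31.90 * gain_g + 10.04 * egg_mass_g

# Hypothetical example: a 3.5 kg hen at 21 degrees C, gaining 5 g/day
# and producing 60 g of egg mass per day.
me = me_requirement(3.5, 21.0, 5.0, 60.0)
```

Note the quadratic temperature term: the maintenance coefficient reaches its minimum at T = 26.45/(2 × 0.50) ≈ 26.5 °C and rises on either side.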


Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)


Cephalosporin C production process optimization was studied based on four experiments carried out in an agitated and aerated tank fermenter operated as a fed-batch reactor. The microorganism Cephalosporium acremonium ATCC 48272 (C-10) was cultivated in a synthetic medium containing glucose as major carbon and energy source. The additional medium contained a hydrolyzed sucrose solution as the main carbon and energy source and it was added after the glucose depletion. By manipulating the supplementary feed rate, it was possible to increase antibiotic production. A mathematical model to represent the fed-batch production process was developed. It was observed that the model was applicable under different operation conditions, showing that optimization studies can be made based on this model.


This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs that integrate the available experimental data, yielding in-silico models that can be used for studying, forming new hypotheses about, and planning experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to the APs of adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; the non-binary coding and the multi-threshold approach are also introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular/sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of classifying indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
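As a minimal illustration of the Local Binary Pattern descriptor mentioned for Chapter II.1: each pixel is encoded by comparing its 8 neighbours with the centre value. The clockwise bit ordering below is one common convention, not necessarily the one used in the thesis, and the toy image is made up for the example:

```python
def lbp_3x3(img, r, c):
    """8-bit Local Binary Pattern code for pixel (r, c) of a 2D grid:
    each of the 8 neighbours (clockwise from the top-left) contributes
    one bit, set when its value is >= the centre value."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
code = lbp_3x3(img, 1, 1)  # only the three top neighbours exceed the centre
```

A texture descriptor for a whole image is then typically the histogram of these per-pixel codes; the non-binary and multi-threshold variants studied in the thesis generalise the single >= comparison.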


Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved-turbulence parameterization has a negligible influence on absolute dispersion, while it affects the contributions of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
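A Markov order 0 model of the kind described (a random displacement model driven by an eddy diffusivity K) can be sketched in a few lines. The drift term proportional to dK/dz is what lets an initially uniform particle distribution stay uniform, i.e. satisfy the WMC; the diffusivity profile and all numbers below are hypothetical, not taken from the thesis:

```python
import random

def step_z(z, K, dKdz, dt, z_top):
    """One step of a Markov order 0 (random displacement) model:
    dz = dK/dz dt + sqrt(2 K dt) xi, with reflection at the boundaries.
    Omitting the dK/dz drift would violate the Well Mixed Condition."""
    z = z + dKdz(z) * dt + (2.0 * K(z) * dt) ** 0.5 * random.gauss(0.0, 1.0)
    while z < 0.0 or z > z_top:          # reflect at z = 0 and z = z_top
        z = -z if z < 0.0 else 2.0 * z_top - z
    return z

# Hypothetical parabolic eddy-diffusivity profile on a layer of depth h:
h = 1000.0
K = lambda z: 1.0 + 10.0 * (z / h) * (1.0 - z / h)   # m2/s
dKdz = lambda z: (10.0 / h) * (1.0 - 2.0 * z / h)    # its exact derivative

random.seed(0)
particles = [random.uniform(0.0, h) for _ in range(2000)]
for _ in range(200):
    particles = [step_z(z, K, dKdz, 50.0, h) for z in particles]

# If the WMC holds, roughly half of the particles remain in the lower half:
frac_lower = sum(z < h / 2.0 for z in particles) / len(particles)
```

Using the exact analytical derivative of K, rather than a finite difference of interpolated values, mirrors the consistency requirement between the diffusion coefficient and its derivative noted in the abstract.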


Tonalite-trondhjemite-granodiorite (TTG) gneisses form up to two-thirds of the preserved Archean continental crust, and there is considerable debate regarding the primary magmatic processes of the generation of these rocks. The popular theories indicate that these rocks were formed by partial melting of basaltic oceanic crust which was previously metamorphosed to garnet-amphibolite and/or eclogite facies conditions, either at the base of thick oceanic crust or by subduction processes.

This study investigates a new aspect regarding the source rock for Archean continental crust, which is inferred to have had a bulk composition richer in magnesium (picrite) than present-day basaltic oceanic crust. This difference is supposed to originate from a higher geothermal gradient in the early Archean, which may have induced higher degrees of partial melting in the mantle, resulting in a thicker and more magnesian oceanic crust.

The methods used to investigate the role of a more MgO-rich source rock in the formation of TTG-like melts, in the context of this new approach, are mineral equilibria calculations with the software THERMOCALC and high-pressure experiments conducted from 10–20 kbar and 900–1100 °C, both combined in a forward modelling approach. Initially, P–T pseudosections for natural rock compositions with increasing MgO contents were calculated in the system NCFMASHTO (Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–TiO2) to ascertain the metamorphic products of rocks with increasing MgO contents, from a MORB up to a komatiite. A small number of previous experiments on komatiites showed the development of pyroxenite instead of eclogite and garnet-amphibolite during metamorphism, and established that melts of these pyroxenites are of basaltic composition, thus again building oceanic crust instead of continental crust.

The P–T pseudosections calculated represent a continuous development of their metamorphic products from amphibolites and eclogites towards pyroxenites. On the basis of these calculations and the changes within the range of compositions, three picritic Models of Archean Oceanic Crust (MAOC) were established with different MgO contents (11, 13 and 15 wt%) ranging between basalt and komatiite. The thermodynamic modelling for MAOC 11, 13 and 15 at supersolidus conditions is imprecise, since no appropriate melt model for metabasic rocks is currently available and the melt model for metapelitic rocks resulted in unsatisfactory calculations. The partially molten region is therefore covered by high-pressure experiments. The results of the experiments show a transition from predominantly tonalitic melts in MAOC 11 to basaltic melts in MAOC 15, and a solidus moving towards higher temperatures with increasing magnesium in the bulk composition. Tonalitic melts were generated in MAOC 11 and 13 at pressures up to 12.5 kbar in the presence of garnet, clinopyroxene and plagioclase ± quartz (± orthopyroxene in the presence of quartz and at lower pressures) in the absence of amphibole, but it could not be explicitly determined whether the tonalitic melts coexisting with an eclogitic residue and rutile at 20 kbar belong to the Archean TTG suite. Basaltic melts were generated predominantly in the presence of granulite facies residues, such as amphibole ± garnet, plagioclase and orthopyroxene lacking quartz, in all MAOC compositions at pressures up to 15 kbar.

The tonalitic melts generated in MAOC 11 and 13 indicate that thicker oceanic crust with more magnesium than that of a modern basalt is also a viable source for the generation of TTG-like melts, and therefore continental crust, in the Archean. The experimental results are related to different geologic settings as a function of pressure. The favoured setting for the generation of early TTG-like melts at 15 kbar is the base of an oceanic crust thicker than that existing today, or melting of slabs in shallow subduction zones, both without interaction of tonalitic melts with the mantle. Tonalitic melts at 20 kbar may have been generated below the plagioclase stability field by slab melting in deeper subduction zones that developed with time during the progressive cooling of the Earth, but it is unlikely that those melts reached lower pressure levels without further mantle interaction.


The thesis deals with numerical algorithms for fluid-structure interaction problems with application to blood flow modelling. It starts with a short introduction to the mathematical description of incompressible viscous flow with non-Newtonian viscosity and a moving linear viscoelastic structure. The mathematical model consists of the generalized Navier-Stokes equations used for the description of fluid flow and the generalized string model for structure movement. The arbitrary Lagrangian-Eulerian approach is used in order to take the moving computational domain into account. Part of the thesis is devoted to a discussion of the non-Newtonian behaviour of shear-thinning fluids, which in our case is blood, and the derivation of two non-Newtonian models frequently used in blood flow modelling. Further, we give a brief overview of recent fluid-structure interaction schemes, with a discussion of the difficulties arising in the numerical modelling of blood flow. Our main contribution lies in the numerical and experimental study of a new loosely-coupled partitioned scheme called the kinematic splitting fluid-structure interaction algorithm. We present a stability analysis for the coupled problem of non-Newtonian shear-dependent fluids in moving domains with viscoelastic boundaries. Here, we assume nonlinearity in both the convective and the diffusive terms. We analyse the convergence of the proposed numerical scheme for a simplified fluid model of the Oseen type. Moreover, we present a series of experiments including numerical error analysis, a comparison of hemodynamic parameters for Newtonian and non-Newtonian fluids, and a comparison of several physiologically relevant computational geometries in terms of wall displacement and wall shear stress. Numerical analysis and an extensive experimental study for several standard geometries confirm the reliability and accuracy of the proposed kinematic splitting scheme for approximating fluid-structure interaction problems.


The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers and subcellular compartments. Due to the difficulties of in vivo experiments on human cells, most of the measurements are acquired in vitro using animal models (e.g. guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the single ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, the models are computationally demanding for numerical simulations and very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis have confirmed that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp, is a useful tool for enhancing our understanding of various components of normal cardiac electrophysiology, as well as of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.


Heart diseases are the leading cause of death worldwide, for both men and women. However, the ionic mechanisms underlying many cardiac arrhythmias and genetic disorders are not completely understood, leading to limited efficacy of the currently available therapies and leaving many open questions for cardiac electrophysiologists. On the other hand, experimental data availability is still a great issue in this field: most experiments are performed in vitro and/or using animal models (e.g. rabbit, dog and mouse), even when the final aim is to better understand the electrical behaviour of the in vivo human heart in either physiological or pathological conditions. Computational modelling constitutes a primary tool in cardiac electrophysiology: in silico simulations, based on the available experimental data, may help to understand the electrical properties of the heart and the ionic mechanisms underlying a specific phenomenon. Once validated, mathematical models can be used for making predictions and testing hypotheses, thus suggesting potential therapeutic targets. This PhD thesis aims to apply computational cardiac modelling of the human single-cell action potential (AP) to three clinical scenarios, in order to gain new insights into the ionic mechanisms involved in the electrophysiological changes observed in vitro and/or in vivo. The first context is blood electrolyte variations, which may occur in patients due to different pathologies and/or therapies. In particular, we focused on extracellular Ca2+ and its effect on the AP duration (APD). The second context is haemodialysis (HD) therapy: in addition to blood electrolyte variations, patients undergo many other changes during HD, e.g. in heart rate, cell volume, pH, and sympatho-vagal balance. The third context is human hypertrophic cardiomyopathy (HCM), a genetic disorder characterised by an increased arrhythmic risk and still lacking a specific pharmacological treatment.