997 results for ensemble modeling


Relevance: 20.00%

Abstract:

Pluripotency in human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs) is regulated by three transcription factors: OCT3/4, SOX2, and NANOG. To fully exploit the therapeutic potential of these cells, it is essential to have a good mechanistic understanding of how self-renewal and pluripotency are maintained. In this study, we demonstrate a powerful systems biology approach in which we first expand a literature-based network encompassing the core regulators of pluripotency by assessing the behavior of genes targeted in perturbation experiments. We focused our attention on highly regulated genes encoding cell-surface and secreted proteins, as these can be manipulated more easily with inhibitors or recombinant proteins. Qualitative modeling, combining Boolean networks with in silico perturbation experiments, was employed to identify novel pluripotency-regulating genes. We validated interleukin-11 (IL-11) and demonstrate that this cytokine is a novel pluripotency-associated factor capable of supporting self-renewal in the absence of exogenously added bFGF in culture. To date, the various protocols for hESC maintenance require supplementation with bFGF to activate the Activin/Nodal branch of the TGFβ signaling pathway. Additional evidence supporting our findings is that IL-11 belongs to the same protein family as LIF, which is known to be necessary for maintaining pluripotency in mouse but not in human ESCs. These cytokines operate through the same gp130 receptor, which interacts with Janus kinases. Our finding might explain why mESCs are in a more naïve cell state than hESCs and how primed hESCs could be converted back to the naïve state. Taken together, our integrative modeling approach has identified novel genes as putative candidates for incorporation into an expanded gene regulatory network responsible for inducing and maintaining pluripotency.
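As an illustration of the qualitative modeling strategy described above, the sketch below simulates a small Boolean network and an in silico knockout. The three-node circuit, its update rules, and the chosen perturbation are hypothetical stand-ins, not the study's actual expanded network.

```python
# Minimal sketch of qualitative Boolean-network modeling with an in silico
# knockout, assuming a hypothetical three-node core circuit (OCT4, SOX2,
# NANOG); the real network and update rules are not given in the abstract.
from itertools import product

GENES = ["OCT4", "SOX2", "NANOG"]

def update(state, knockout=None):
    """Synchronous update of the toy pluripotency circuit."""
    oct4, sox2, nanog = state
    nxt = {
        "OCT4": sox2 and nanog,   # illustrative rules only
        "SOX2": oct4 and nanog,
        "NANOG": oct4 and sox2,
    }
    if knockout is not None:
        nxt[knockout] = 0         # simulate a loss-of-function perturbation
    return tuple(int(nxt[g]) for g in GENES)

def attractors(knockout=None):
    """Iterate every initial state to its fixed point (this toy net has no limit cycles)."""
    found = set()
    for state in product([0, 1], repeat=len(GENES)):
        seen = set()
        while state not in seen:
            seen.add(state)
            state = update(state, knockout)
        found.add(state)
    return found

print("wild type      :", attractors())
print("NANOG knockout :", attractors(knockout="NANOG"))
```

Under these toy rules the knockout removes the "all genes ON" attractor, which is the kind of qualitative readout such perturbation screens rely on.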

Relevance: 20.00%

Abstract:

OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete-time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model in which the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient-years. Metabolic syndrome was more likely to develop in patients exposed to stavudine but less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and the advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
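The following sketch illustrates the general mechanism behind such a hierarchical model rather than the paper's actual fit: per-drug log hazard ratios are shrunk toward their drug-class mean, so an imprecise, extreme estimate (like a conventional HR of 0.06) is pulled toward a more plausible value. All numbers, standard errors, and the between-drug variance are invented.

```python
# Illustrative empirical-Bayes shrinkage of per-drug log hazard ratios toward
# their drug-class mean (a simplified stand-in for the paper's hierarchical
# survival model). Every number below is hypothetical.
import numpy as np

# hypothetical per-drug estimates (log hazard ratio) and standard errors for one class
log_hr = np.array([np.log(0.06), np.log(0.70), np.log(0.90), np.log(1.10)])
se     = np.array([0.9, 0.3, 0.25, 0.3])   # first drug is imprecisely estimated
tau2   = 0.2 ** 2                          # assumed between-drug variance within the class

# precision-weighted class mean
class_mean = np.average(log_hr, weights=1.0 / (se ** 2 + tau2))

# shrinkage: precise estimates keep their own value, imprecise ones move toward the class mean
w = tau2 / (tau2 + se ** 2)
shrunk = w * log_hr + (1 - w) * class_mean

for raw, post in zip(np.exp(log_hr), np.exp(shrunk)):
    print(f"conventional HR {raw:.2f} -> hierarchical HR {post:.2f}")
```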

Relevance: 20.00%

Abstract:

Surface geological mapping, laboratory measurements of rock properties, and seismic reflection data are integrated through three-dimensional seismic modeling to determine the likely cause of upper crustal reflections and to elucidate the deep structure of the Penninic Alps in eastern Switzerland. Results indicate that the principal upper crustal reflections recorded on the south end of Swiss seismic line NFP20-EAST can be explained by the subsurface geometry of stacked basement nappes. In addition, modeling results provide improvements to structural maps based solely on surface trends and suggest the presence of previously unrecognized rock units in the subsurface. Construction of the initial model is based upon extrapolation of plunging surface structures; velocities and densities are established by laboratory measurements of corresponding rock units. Iterative modification produces a best-fit model that refines the definition of the subsurface geometry of major structures. We conclude that most reflections from the upper 20 km can be ascribed to the presence of sedimentary cover rocks (especially carbonates) and ophiolites juxtaposed against crystalline basement nappes. Thus, in this area, reflections appear to be due principally to first-order lithologic contrasts. This study also demonstrates not only the importance of three-dimensional effects (sideswipe) in interpreting seismic data, but also that these effects can be treated quantitatively through three-dimensional modeling.
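A minimal sketch of the link such modeling exploits between laboratory rock properties and reflectivity: the normal-incidence reflection coefficient computed from the acoustic impedance contrast across an interface. The velocities and densities below are illustrative placeholders, not the measured values from the study.

```python
# Normal-incidence reflection coefficient from an impedance contrast.
def reflection_coefficient(v1, rho1, v2, rho2):
    """Layer 1 above, layer 2 below; velocities in m/s, densities in kg/m^3."""
    z1, z2 = v1 * rho1, v2 * rho2      # acoustic impedances
    return (z2 - z1) / (z2 + z1)

# e.g. carbonate cover rocks against crystalline basement (illustrative values)
print(reflection_coefficient(v1=6000.0, rho1=2700.0,
                             v2=6400.0, rho2=2850.0))
```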

Relevance: 20.00%

Abstract:

The disintegration of recovered paper is the first operation in the preparation of recycled pulp. It is known that the defibering process follows first-order kinetics, from which the disintegration kinetic constant (K_D) can be obtained in different ways. The disintegration constant can be obtained from the Somerville index results (%ISV) and from the dissipated energy per unit volume (S_S). The %ISV is related to the quantity of non-defibered paper, as a measure of the residual non-disintegrated fiber (percentage of flakes), expressed as a function of disintegration time. In this work, the disintegration kinetics of recycled coated paper was evaluated, working at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12 and 14%). The experimental disintegration kinetic constant, K_D, was obtained from the evolution of the Somerville index as a function of time; as consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated from Rayleigh's dissipation function (modelled K_D) showed good correlation with the experimental values obtained from either the evolution of the Somerville index or the dissipated energy.
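A minimal sketch of how the experimental kinetic constant can be extracted under the first-order assumption S(t) = S0 * exp(-K_D * t), using a log-linear fit of the Somerville index against disintegration time. The data points are invented for illustration and are not the paper's measurements.

```python
# Fit the disintegration kinetic constant K_D from first-order kinetics,
# S(t) = S0 * exp(-K_D * t), via a log-linear regression. Data are hypothetical.
import numpy as np

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # disintegration time, min
somerville = np.array([40.0, 22.0, 12.5, 7.0, 4.0])   # % flakes (non-defibered residual)

slope, intercept = np.polyfit(t, np.log(somerville), 1)
K_D = -slope
print(f"K_D = {K_D:.3f} 1/min, S0 = {np.exp(intercept):.1f} %")
```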


Relevance: 20.00%

Abstract:

Knowledge about spatial biodiversity patterns is a basic criterion for reserve network design. Although herbarium collections hold large quantities of information, the data are often scattered and cannot supply complete spatial coverage. Alternatively, herbarium data can be used to fit species distribution models, whose predictions provide complete spatial coverage and can be used to derive species richness maps. Here, we build on previous efforts to propose an improved compositionalist framework for using species distribution models to better inform conservation management. We illustrate the approach with models fitted with six different methods and combined using an ensemble approach for 408 plant species in a tropical and megadiverse country (Ecuador). As a complementary view to the traditional richness-hotspot methodology, which consists of a simple stacking of species distribution maps, the compositionalist modelling approach used here combines separate predictions for different pools of species to identify areas of alternative suitability for conservation. Our results show that the compositionalist approach better captures the established protected areas than the traditional richness-hotspot strategy and allows the identification of areas in Ecuador that would optimally complement the current protection network. Further studies should aim at refining the approach with more groups and additional species information.
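To make the contrast concrete, the sketch below compares, on an invented 2x2 grid, the traditional richness-hotspot map (binarize and stack the species distribution models) with a compositionalist-style summary that keeps separate suitability layers per species pool. Species, pools, threshold, and probabilities are all hypothetical.

```python
# Stacked richness map versus per-pool suitability layers (toy example).
import numpy as np

# ensemble-averaged occurrence probabilities, one 2x2 map per hypothetical species
preds = {
    "sp_A": np.array([[0.9, 0.2], [0.7, 0.1]]),
    "sp_B": np.array([[0.8, 0.3], [0.6, 0.2]]),
    "sp_C": np.array([[0.1, 0.9], [0.2, 0.8]]),
}
pools = {"lowland": ["sp_A", "sp_B"], "montane": ["sp_C"]}
threshold = 0.5

# (1) traditional view: predicted richness per cell from binarized, stacked maps
richness = sum((p >= threshold).astype(int) for p in preds.values())
print("stacked richness:\n", richness)

# (2) compositionalist-style view: mean suitability per species pool, kept separate
for pool, species in pools.items():
    pool_suitability = np.mean([preds[s] for s in species], axis=0)
    print(f"{pool} pool suitability:\n", pool_suitability)
```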

Relevance: 20.00%

Abstract:

Modeling the mechanisms that determine how humans and other agents choose among different behavioral and cognitive processes, be they strategies, routines, actions, or operators, represents a paramount theoretical stumbling block across disciplines, ranging from the cognitive and decision sciences to economics, biology, and machine learning. Using the cognitive and decision sciences as a case study, we provide an introduction to what is also known as the strategy selection problem. First, we explain why many researchers assume humans and other animals to come equipped with a repertoire of behavioral and cognitive processes. Second, we expose three challenges, descriptive, predictive, and prescriptive, that are common to all disciplines which aim to model the choice among these processes. Third, we give an overview of different approaches to strategy selection, including cost-benefit, ecological, learning, memory, unified, connectionist, sequential sampling, and maximization approaches. We conclude by pointing to opportunities for future research and by stressing that the selection problem is far from being resolved.
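As a toy illustration of one of the approaches surveyed (not taken from the article), the snippet below implements a bare-bones cost-benefit rule for strategy selection: choose the strategy whose expected accuracy minus effort cost is highest. Strategy names and numbers are hypothetical.

```python
# Cost-benefit strategy selection on invented numbers.
strategies = {
    # name: (expected accuracy, effort cost in the same payoff units)
    "take-the-best": (0.78, 0.05),
    "weighted-additive": (0.84, 0.20),
    "random-guess": (0.50, 0.00),
}

def select(strategies):
    """Return the strategy with the highest expected accuracy minus effort cost."""
    return max(strategies, key=lambda s: strategies[s][0] - strategies[s][1])

print(select(strategies))   # -> 'take-the-best' under these assumed numbers
```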

Relevance: 20.00%

Abstract:

Floods and the risk of dam overtopping, particularly for earth embankment dams, during heavy precipitation have long been a concern for the authorities and the public. Studies carried out in recent years have shown that global climate warming was accompanied by an increase in the frequency of heavy precipitation and floods in Switzerland and in many regions of the world during the 20th century. Global and regional climate models predict that the frequency of heavy precipitation should continue to increase during the 21st century in Switzerland and worldwide. This makes current research on fine-scale rainfall and flood modeling all the more important. In Switzerland, to ensure adequate protection of people and the economy, maps of probable maximum precipitation (PMP) have been produced. The PMP values were compared with the extreme precipitation measured in the different regions of the country. These PMPs are then used by hydrological models to compute probable maximum floods (PMF). This PMP-PMF method nevertheless requires a number of precautions. If applied incorrectly or on the basis of insufficient data, it can lead to an overestimation of flood discharges, particularly for large catchments and for mountainous regions, resulting in substantial extra costs. These problems stem in particular from the fact that most hydrological models distribute the extreme precipitation (PMP) uniformly in time over the whole catchment. To remedy this problem, the main objective of this thesis is to develop a distributed hydrological model, called MPF (Modeling Precipitation Flood), capable of estimating the PMF realistically from a PMP distributed in space and time by means of clouds. The MPF model comprises three main parts. In the first part, the extreme precipitation computed by a mesoscale meteorological model with a horizontal resolution of 2 km is redistributed at a local scale (25 or 50 m), non-uniformly in space and time. The second part deals with the modeling of surface and subsurface water flow, including infiltration and exfiltration. The third part covers snowmelt modeling, based on a heat-transfer calculation. The MPF model was calibrated on alpine catchments for which precipitation and discharge data are available over a considerably long period that includes several heavy-rainfall episodes with high discharges. From these episodes, model input parameters such as the soil roughness and the mean channel width for surface flow could be estimated. Following the same procedure, the parameters used in the simulation of subsurface flow were also estimated indirectly, since direct measurements of subsurface flow and exfiltration are difficult to obtain. The spatio-temporal rainfall distribution model was also validated using radar images of the rainfall structure produced by a supercell cloud. The hyetographs obtained at several points in the terrain are very close to those recorded in the radar images.
The results of validating the model on heavy-flood episodes show good agreement between the simulated and the observed discharges. This agreement was measured with three efficiency criteria, all of which gave satisfactory values. This shows that the developed model is valid and can be used for extreme episodes such as the PMP. Simulations were run on several catchments with PMP-type rainfall as input. Varied conditions were used, such as saturated or unsaturated soil, or the presence of a snow cover on the ground at the time of the PMP, leading to PMF estimates for catastrophic scenarios. Finally, the results obtained show how to better estimate the safety flood of dams from an extreme ten-thousand-year rainfall with a return period of 10,000 years.
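A minimal sketch, not the MPF model itself, of why the spatio-temporal distribution of the PMP matters for the PMF: the same rainfall volume routed through a simple linear reservoir yields different peak discharges depending on how the hyetograph is spread in time. Catchment area, storage constant, and hyetographs are hypothetical.

```python
# Route two hyetographs with the same total depth through a linear reservoir
# and compare peak discharges. All parameters are invented for illustration.
import numpy as np

area_km2 = 50.0
k_hours = 3.0                       # linear-reservoir storage constant
dt = 1.0                            # time step, hours

def route(hyetograph_mm_per_h):
    """Return the peak discharge (m3/s) of a rainfall series routed through the reservoir."""
    storage, peak = 0.0, 0.0
    for p in hyetograph_mm_per_h:
        inflow = p * 1e-3 * area_km2 * 1e6 / 3600.0   # mm/h over the catchment -> m3/s
        outflow = storage / (k_hours * 3600.0)
        storage += (inflow - outflow) * dt * 3600.0
        peak = max(peak, outflow)
    return peak

total_mm = 120.0
uniform = np.full(24, total_mm / 24.0)                                   # spread over 24 h
peaked = np.concatenate([np.zeros(10), np.full(4, total_mm / 4.0), np.zeros(10)])
print(f"uniform hyetograph peak      : {route(uniform):7.1f} m3/s")
print(f"concentrated hyetograph peak : {route(peaked):7.1f} m3/s")
```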

Relevance: 20.00%

Abstract:

The material presented in these notes covers the sessions Modelling of electromechanical systems, Passive control theory I and Passive control theory II of the II EURON/GEOPLEX Summer School on Modelling and Control of Complex Dynamical Systems. We start with a general description of what an electromechanical system is from a network modelling point of view. Next, a general formulation in terms of PHDS is introduced, and some of the previous electromechanical systems are rewritten in this formalism. Power converters, which are variable structure systems (VSS), can also be given a PHDS form. We conclude the modelling part of these lectures with a rather complex example, showing the interconnection of subsystems from several domains, namely an arrangement to temporarily store the surplus energy in a section of a metropolitan transportation system based on dc motor vehicles, using either arrays of supercapacitors or an electrically powered flywheel. The second part of the lectures addresses control of PHD systems. We first present the idea of control as the power interconnection of a plant and a controller, and the obstacle that dissipation poses for this approach. Next we discuss how to circumvent this obstacle and present the basic ideas of Interconnection and Damping Assignment (IDA) passivity-based control of PHD systems.
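For readers unfamiliar with the PHDS formalism referred to above, the sketch below writes a textbook mass-spring-damper (an assumed example, not one of the notes' electromechanical systems) in port-Hamiltonian form, xdot = (J - R) dH/dx + g u, and integrates it to show the energy decay that underlies the passivity-based control ideas.

```python
# Port-Hamiltonian form of a mass-spring-damper: x = (q, p),
# H = p^2/(2m) + k*q^2/2, xdot = (J - R) * grad(H) + g * u.
import numpy as np

m, k, d = 1.0, 4.0, 0.3          # mass, spring stiffness, damping (illustrative)
J = np.array([[0.0, 1.0],        # interconnection matrix (skew-symmetric)
              [-1.0, 0.0]])
R = np.array([[0.0, 0.0],        # dissipation matrix (symmetric, positive semidefinite)
              [0.0, d]])
g = np.array([0.0, 1.0])         # the input force acts on the momentum

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])

def step(x, u, dt=1e-3):
    """One explicit Euler step of the port-Hamiltonian dynamics."""
    return x + dt * ((J - R) @ grad_H(x) + g * u)

x = np.array([1.0, 0.0])         # initial displacement, zero momentum
for _ in range(5000):
    x = step(x, u=0.0)
H = 0.5 * k * x[0] ** 2 + x[1] ** 2 / (2 * m)
print("state after 5 s:", x, " stored energy:", H)   # energy decays, as passivity predicts
```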