77 results for physically based modeling

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Computational modeling has become a widely used tool for unraveling the mechanisms of higher level cooperative cell behavior during vascular morphogenesis. However, experimenting with published simulation models or adding new assumptions to those models can be daunting for novice and even for experienced computational scientists. Here, we present a step-by-step, practical tutorial for building cell-based simulations of vascular morphogenesis using the Tissue Simulation Toolkit (TST). The TST is a freely available, open-source C++ library for developing simulations with the two-dimensional cellular Potts model, a stochastic, agent-based framework to simulate collective cell behavior. We will show the basic use of the TST to simulate and experiment with published simulations of vascular network formation. Then, we will present step-by-step instructions and explanations for building a recent simulation model of tumor angiogenesis. Demonstrated mechanisms include cell-cell adhesion, chemotaxis, cell elongation, haptotaxis, and haptokinesis.
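For readers who want a feel for the formalism before opening the TST's C++ code, the following is a minimal, self-contained sketch of a two-dimensional cellular Potts model step: a Hamiltonian with adhesion and area-constraint terms, sampled by Metropolis copy attempts. It is written in Python for brevity, it is not the TST API, and all parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters (not taken from the TST or the tutorial).
J_CELL_CELL, J_CELL_MEDIUM = 10.0, 6.0   # adhesion energies per contact
LAMBDA_AREA, TARGET_AREA = 1.0, 25.0     # area-constraint strength and target
TEMPERATURE = 5.0                        # Metropolis "temperature"

def adhesion_energy(lattice):
    """Sum contact energies over horizontal and vertical neighbour pairs."""
    energy = 0.0
    for axis in (0, 1):
        neighbour = np.roll(lattice, 1, axis=axis)
        different = lattice != neighbour
        cell_medium = different & ((lattice == 0) | (neighbour == 0))
        energy += J_CELL_MEDIUM * cell_medium.sum()
        energy += J_CELL_CELL * (different & ~cell_medium).sum()
    return energy

def area_energy(lattice):
    """Quadratic penalty keeping every cell near its target area."""
    _, areas = np.unique(lattice[lattice > 0], return_counts=True)
    return LAMBDA_AREA * ((areas - TARGET_AREA) ** 2).sum()

def copy_attempt(lattice, rng):
    """One Metropolis copy attempt: a random site tries to copy a neighbour."""
    n = lattice.shape[0]
    x, y = rng.integers(n, size=2)
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nx, ny = (x + dx) % n, (y + dy) % n
    if lattice[x, y] == lattice[nx, ny]:
        return
    trial = lattice.copy()
    trial[x, y] = lattice[nx, ny]
    # A real implementation (like the TST) evaluates the energy change locally;
    # recomputing the full Hamiltonian keeps this sketch short.
    dE = (adhesion_energy(trial) + area_energy(trial)
          - adhesion_energy(lattice) - area_energy(lattice))
    if dE <= 0 or rng.random() < np.exp(-dE / TEMPERATURE):
        lattice[x, y] = lattice[nx, ny]

rng = np.random.default_rng(0)
lattice = np.zeros((50, 50), dtype=int)   # 0 = medium
lattice[20:25, 20:25] = 1                 # a single square cell with ID 1
for _ in range(10_000):
    copy_attempt(lattice, rng)
```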

Relevance:

90.00%

Publisher:

Abstract:

Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion inspired mainly by the model developed by Braun and Gilardi (2006). I start by developing a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the latter has been developed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.

Les méthodes d'analyse de biographie (event history analysis) permettent de mettre en évidence l'existence de phénomènes de diffusion et de les décrire, mais ne permettent pas d'en étudier le processus. Les simulations informatiques, grâce aux performances croissantes des ordinateurs, rendent possible l'étude des processus en tant que tels. Cette thèse, basée sur le modèle théorique développé par Braun et Gilardi (2006), présente une simulation centrée sur les agents des phénomènes de diffusion des politiques. Le point de départ de ce travail met en lumière, au niveau théorique, les principaux facteurs de changement internes à un pays : la préférence pour une politique donnée, l'efficacité de cette dernière, les contraintes institutionnelles, l'idéologie, et les principaux mécanismes de diffusion que sont l'apprentissage, la compétition, l'émulation et la coercition. La diffusion, définie par l'interdépendance des différents acteurs, est un système complexe dont l'étude est rendue possible par les simulations centrées sur les agents. Au niveau méthodologique, nous présenterons également les principaux concepts sous-jacents aux simulations, notamment la complexité et l'émergence. De plus, l'utilisation de simulations informatiques implique le développement d'un algorithme et sa programmation. Cette dernière réalisée, les agents peuvent interagir, avec comme résultat l'émergence d'un phénomène de diffusion, dérivé de l'apprentissage, où le choix d'un agent dépend en grande partie de ceux faits par ses voisins. 
De plus, ce phénomène suit une courbe en S caractéristique, poussant à la création de régions politiquement identiques, mais divergentes au niveau globale. Enfin, l'efficacité moyenne, dans ce monde simulé, suit une courbe en J, ce qui signifie qu'il faut du temps, non seulement pour que la politique montre ses effets, mais également pour qu'un pays introduise la politique la plus efficace. En conclusion, la diffusion est un phénomène émergent résultant d'interactions complexes dont les résultats du processus tel que développé dans ce modèle correspondent tant aux attentes théoriques qu'aux résultats pratiques.
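As an illustration of the learning mechanism described above (and not the thesis code or the Braun and Gilardi framework itself), the sketch below lets agents on a grid adopt a policy with a probability that grows with the share of adopting neighbours; this is enough to produce the S-shaped adoption curve and the spatial clusters mentioned in the abstract. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = 30                      # agents on a GRID x GRID torus
BASE_RATE = 0.002              # spontaneous adoption probability
LEARNING_WEIGHT = 0.15         # extra probability per share of adopting neighbours

adopted = np.zeros((GRID, GRID), dtype=bool)
adopted[GRID // 2, GRID // 2] = True      # a single early adopter

adoption_curve = []
for step in range(300):
    # Share of the four von Neumann neighbours that have adopted.
    neighbour_share = sum(
        np.roll(adopted, shift, axis=axis)
        for shift in (1, -1) for axis in (0, 1)
    ) / 4.0
    # Learning: an agent's adoption probability depends on its neighbours' choices.
    p_adopt = BASE_RATE + LEARNING_WEIGHT * neighbour_share
    adopted |= rng.random((GRID, GRID)) < p_adopt
    adoption_curve.append(adopted.mean())

# The cumulative share of adopters typically traces an S-shaped curve, and
# contiguous clusters of adopters ("political clusters") emerge on the grid.
print([round(x, 2) for x in adoption_curve[::50]])
```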

Relevance:

80.00%

Publisher:

Abstract:

Data characteristics and species traits are expected to influence the accuracy with which species' distributions can be modeled and predicted. We compare 10 modeling techniques in terms of predictive power and sensitivity to location error, change in map resolution, and sample size, and assess whether some species traits can explain variation in model performance. We focused on 30 native tree species in Switzerland and used presence-only data to model current distribution, which we evaluated against independent presence-absence data. While there are important differences between the predictive performance of modeling methods, the variance in model performance is greater among species than among techniques. Within the range of data perturbations in this study, some extrinsic parameters of data affect model performance more than others: location error and sample size reduced performance of many techniques, whereas grain had little effect on most techniques. No technique can rescue species that are difficult to predict. The predictive power of species-distribution models can partly be predicted from a series of species characteristics and traits based on growth rate, elevational distribution range, and maximum elevation. Slow-growing species or species with narrow and specialized niches tend to be better modeled. The Swiss presence-only tree data produce models that are reliable enough to be useful in planning and management applications.
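The evaluation design described above, calibrating on presence-only data and validating against independent presence-absence data, can be sketched as follows. The example uses synthetic data and a logistic regression as a stand-in for the ten techniques actually compared; it is not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic stand-in data: two environmental predictors (e.g. temperature,
# precipitation) and a species responding to the first one.
n = 2000
env = rng.normal(size=(n, 2))
true_prob = 1 / (1 + np.exp(-(2.0 * env[:, 0] - 0.5)))
observed = rng.random(n) < true_prob

# "Presence-only" training set: presences plus random background points.
presences = env[observed]
background = env[rng.choice(n, size=len(presences), replace=False)]
X_train = np.vstack([presences, background])
y_train = np.concatenate([np.ones(len(presences)), np.zeros(len(background))])

model = LogisticRegression().fit(X_train, y_train)

# Independent presence-absence evaluation set, scored with AUC.
env_eval = rng.normal(size=(500, 2))
y_eval = rng.random(500) < 1 / (1 + np.exp(-(2.0 * env_eval[:, 0] - 0.5)))
auc = roc_auc_score(y_eval, model.predict_proba(env_eval)[:, 1])
print(f"AUC on independent presence-absence data: {auc:.2f}")
```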

Relevance:

80.00%

Publisher:

Abstract:

Temperature reconstructions for recent centuries are the basis of estimates of the natural variability in the climate system before and during the onset of anthropogenic perturbation. Here we present, for the first time, an independent and physically based reconstruction of mean annual temperature over the past half millennium obtained from groundwater in France. The reconstructed noble gas temperature (NGT) record suggests cooler-than-present climate conditions throughout the 16th-19th centuries. Periods of warming occur in the 17th-18th and 20th centuries, while cooling is reconstructed in the 19th century. The record shows a noticeable coincidence with other temperature records. Deuterium excess varies in parallel with the NGT and indicates variation in the seasonality of the aquifer recharge, whereas high excess air in groundwater indicates periods with strong oscillations of the water table.
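The principle behind noble gas thermometry can be illustrated with a toy inversion: dissolved noble gas concentrations depend on recharge temperature through solubility, so temperature is recovered by least-squares fitting. The solubility function and coefficients below are placeholders for illustration only, not the solubility data or excess-air treatment used in the study.

```python
import numpy as np
from scipy.optimize import least_squares

GASES = ["Ne", "Ar", "Kr", "Xe"]

def equilibrium_concentration(temp_c, a, b):
    """Hypothetical solubility model: concentration falls with temperature.
    Real NGT work uses tabulated solubility functions (of temperature,
    salinity, pressure) plus an excess-air term; a and b are placeholders."""
    return a * np.exp(-b * temp_c)

# Illustrative per-gas coefficients and a synthetic "measured" sample
# generated at a true recharge temperature of 9 degrees C with 2% noise.
A = np.array([1.0, 2.0, 0.5, 0.1])
B = np.array([0.010, 0.018, 0.022, 0.028])
rng = np.random.default_rng(3)
measured = equilibrium_concentration(9.0, A, B) * (1 + 0.02 * rng.normal(size=4))

def residuals(params):
    (temp_c,) = params
    return equilibrium_concentration(temp_c, A, B) - measured

fit = least_squares(residuals, x0=[5.0])
print(f"Reconstructed noble gas temperature: {fit.x[0]:.1f} C")
```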

Relevance:

80.00%

Publisher:

Abstract:

Estimation of the dimensions of fluvial geobodies from core data is a notoriously difficult problem in reservoir modeling. To try to improve such estimates and, hence, reduce uncertainty in geomodels, data on dunes, unit bars, cross-bar channels, and compound bars and their associated deposits are presented herein from the sand-bed braided South Saskatchewan River, Canada. These data are used to test models that relate the scale of the formative bed forms to the dimensions of the preserved deposits and, therefore, provide an insight as to how such deposits may be preserved over geologic time. The preservation of bed-form geometry is quantified by comparing the alluvial architecture above and below the maximum erosion depth of the modern channel deposits. This comparison shows that there is no significant difference in the mean set thickness of dune cross-strata above and below the basal erosion surface of the contemporary channel, thus suggesting that dimensional relationships between dune deposits and the formative bed-form dimensions are likely to be valid for both recent and older deposits. The data show that estimates of mean bankfull flow depth derived from dune, unit bar, and cross-bar channel deposits are all very similar. Thus, the use of all these metrics together can provide a useful check that all components and scales of the alluvial architecture have been identified correctly when building reservoir models. The data also highlight several practical issues with identifying and applying data relating to cross-strata. For example, the deposits of unit bars were found to be severely truncated in length and width, with only approximately 10% of the mean bar-form length remaining, thus making identification in section difficult. For similar reasons, the deposits of compound bars were found to be especially difficult to recognize, and hence, estimates of channel depth based on this method may be problematic. Where only core data are available (i.e., no outcrop data exist), formative flow depths are suggested to be best reconstructed using cross-strata formed by dunes. However, theoretical relationships between the distribution of set thicknesses and formative dune height are found to result in slight overestimates of the latter and, hence, of mean bankfull flow depths derived from these measurements. This article illustrates that the preservation of fluvial cross-strata and, thus, the paleohydraulic inferences that can be drawn from them, are a function of the ratio of the size and migration rate of bed forms to the time scale of aggradation and channel migration. These factors must thus be considered when deciding on appropriate length:thickness ratios for the purposes of object-based modeling in reservoir characterization.
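A sketch of the kind of scaling calculation referred to above: mean dune height is estimated from preserved cross-set thicknesses, and bankfull flow depth from dune height. The scaling factors used (dune height roughly 2.9 times mean set thickness; flow depth roughly 6-10 times dune height) are stated here as illustrative assumptions in line with commonly cited empirical relations, not as values taken from this article.

```python
import numpy as np

# Cross-set thicknesses measured in core (metres); values are illustrative.
set_thicknesses = np.array([0.18, 0.22, 0.15, 0.30, 0.25, 0.20, 0.27, 0.19])

mean_set = set_thicknesses.mean()

# Assumed scaling factors (illustrative, in the spirit of published empirical
# relations): mean dune height ~ 2.9 x mean set thickness, and bankfull flow
# depth ~ 6-10 x mean dune height.
dune_height = 2.9 * mean_set
depth_low, depth_high = 6.0 * dune_height, 10.0 * dune_height

print(f"Mean set thickness:       {mean_set:.2f} m")
print(f"Estimated dune height:    {dune_height:.2f} m")
print(f"Estimated bankfull depth: {depth_low:.1f}-{depth_high:.1f} m")
```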

Relevance:

80.00%

Publisher:

Abstract:

Snow cover is an important control in mountain environments and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) used to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. SCMs were evaluated against SPOT-HRVIR images, and predictions of snow water equivalent from the two models against ground measurements. Finally, SCMs of the two models were compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results support the use of SnowModel in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
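The categorical evaluation mentioned above (rate of true positives, overestimation of snow cover) reduces to confusion-matrix counts on binary maps. A minimal sketch with synthetic arrays, not PREVAH or SnowModel output, follows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic binary maps: True = snow, False = snow-free (stand-ins for a
# modelled SCM and a SPOT-derived reference map).
observed = rng.random((100, 100)) < 0.4
modelled = observed.copy()
flip = rng.random((100, 100)) < 0.1          # introduce 10% disagreement
modelled[flip] = ~observed[flip]

tp = np.sum(modelled & observed)             # snow predicted and observed
fp = np.sum(modelled & ~observed)            # snow predicted, none observed
fn = np.sum(~modelled & observed)            # snow missed

true_positive_rate = tp / (tp + fn)
overestimation_rate = fp / observed.size     # share of cells with spurious snow

print(f"True positive rate: {true_positive_rate:.2f}")
print(f"Overestimation:     {overestimation_rate:.2f}")
```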

Relevance:

40.00%

Publisher:

Abstract:

A factor limiting preliminary rockfall hazard mapping at the regional scale is often the lack of knowledge of potential source areas. Nowadays, high-resolution topographic data (LiDAR) can account for realistic landscape details even at large scale. With such fine-scale morphological variability, quantitative geomorphometric analyses become a relevant approach for delineating potential rockfall instabilities. Using the digital elevation model (DEM)-based 'slope families' concept over areas of similar lithology, together with the cliff and scree zones available from the 1:25,000 topographic map, a rockfall hazard susceptibility map was drawn up for the canton of Vaud, Switzerland, in order to provide a relevant hazard overview. Slope surfaces above morphometrically defined threshold angles were considered as rockfall source zones. 3D modelling (CONEFALL) was then applied to each of the estimated source zones in order to assess the maximum runout length. Comparisons with known events and other rockfall hazard assessments show good agreement, indicating that it is possible to assess rockfall activity over large areas from DEM-based parameters and topographical elements.
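A minimal sketch of the two steps described above: flagging DEM cells above a slope-angle threshold as potential source areas, then estimating a maximum runout footprint with a simple reach-angle (energy-line) cone. This follows the spirit of the approach but is not the CONEFALL algorithm; the DEM and all thresholds are illustrative.

```python
import numpy as np

CELL_SIZE = 10.0          # DEM resolution (m), illustrative
SLOPE_THRESHOLD = 45.0    # source-area slope threshold (degrees), illustrative
REACH_ANGLE = 33.0        # energy-line angle for the runout cone, illustrative

# Synthetic DEM: a slope steep in the west, flattening towards the east.
x = np.arange(0, 500, CELL_SIZE)
y = np.arange(0, 500, CELL_SIZE)
X, Y = np.meshgrid(x, y)
dem = 800.0 * np.exp(-X / 200.0) + 5.0 * np.sin(Y / 60.0)

# Slope angle from finite differences.
dzdy, dzdx = np.gradient(dem, CELL_SIZE)
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
sources = slope_deg >= SLOPE_THRESHOLD

# Reach-angle cone: a cell is reachable from a source if it lies below the
# line dropping from that source at REACH_ANGLE.
tan_reach = np.tan(np.radians(REACH_ANGLE))
runout = np.zeros_like(sources)
for r, c in zip(*np.nonzero(sources)):
    dist = np.hypot(X - X[r, c], Y - Y[r, c])
    runout |= dem <= dem[r, c] - tan_reach * dist

print(f"Source cells: {sources.sum()}, cells within runout: {runout.sum()}")
```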

Relevance:

40.00%

Publisher:

Abstract:

Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models predicting metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas less has been done to predict the activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES2. The study first involved homology modeling of the hCES2 protein based on the model of hCES1, since the two proteins share a high degree of homology (approximately 73%). A set of 40 known substrates of hCES2 was taken from the literature; the ligands were docked in both their neutral and ionized forms using GriDock, a parallel tool based on the AutoDock4.0 engine which can perform efficient and easy virtual screening analyses of large molecular databases exploiting multi-core architectures. Useful statistical models (e.g., r² = 0.91 for substrates in their unprotonated state) were calculated by correlating experimental pKm values with the distance between the carbon atom of the substrate's ester group and the hydroxy function of Ser228. Additional parameters in the equations accounted for hydrophobic and electrostatic interactions between substrates and contributing residues. The negatively charged residues in the hCES2 cavity explained the preference of the enzyme for neutral substrates and, more generally, suggested that ligands which interact too strongly by ionic bonds (e.g., ACE inhibitors) cannot be good CES2 substrates because they are trapped in the cavity in unproductive modes and behave as inhibitors. The effects of protonation on substrate recognition and the contrasting behavior of substrates and products were finally investigated by MD simulations of some CES2 complexes.
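The statistical step described above, correlating experimental pKm values with a docking-derived distance plus interaction-energy terms, amounts to a small multiple linear regression. The sketch below uses synthetic stand-in data, not the 40 hCES2 substrates or their docking results.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-ins for docking-derived descriptors of n substrates:
# distance (in angstroms) from the ester carbon to the Ser228 hydroxyl,
# plus hydrophobic and electrostatic interaction scores.
n = 40
distance = rng.uniform(3.0, 8.0, n)
hydrophobic = rng.normal(0.0, 1.0, n)
electrostatic = rng.normal(0.0, 1.0, n)
pKm = (6.5 - 0.45 * distance + 0.3 * hydrophobic + 0.2 * electrostatic
       + rng.normal(0.0, 0.15, n))       # synthetic "experimental" values

# Ordinary least squares with an intercept, then the coefficient of
# determination of the fit.
X = np.column_stack([np.ones(n), distance, hydrophobic, electrostatic])
coef, *_ = np.linalg.lstsq(X, pKm, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((pKm - predicted) ** 2) / np.sum((pKm - pKm.mean()) ** 2)
print(f"Coefficients: {np.round(coef, 2)}, r^2 = {r2:.2f}")
```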

Relevance:

40.00%

Publisher:

Abstract:

We have previously shown that a 28-amino acid peptide derived from the BRC4 motif of BRCA2 tumor suppressor inhibits selectively human RAD51 recombinase (HsRad51). With the aim of designing better inhibitors for cancer treatment, we combined an in silico docking approach with in vitro biochemical testing to construct a highly efficient chimera peptide from eight existing human BRC motifs. We built a molecular model of all BRC motifs complexed with HsRad51 based on the crystal structure of the BRC4 motif-HsRad51 complex, computed the interaction energy of each residue in each BRC motif, and selected the best amino acid residue at each binding position. This analysis enabled us to propose four amino acid substitutions in the BRC4 motif. Three of these increased the inhibitory effect in vitro, and this effect was found to be additive. We thus obtained a peptide that is about 10 times more efficient in inhibiting HsRad51-ssDNA complex formation than the original peptide.
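The selection logic described above can be sketched as follows: given per-residue interaction energies computed for each of the eight BRC motifs at each binding position, take the most favourable residue at every position to assemble the chimera. The sequences and energies below are invented for illustration; in the study the energies came from modelling each BRC motif in complex with HsRad51.

```python
import numpy as np

# Hypothetical aligned residues of 8 BRC motifs at 6 binding positions
# (made-up sequences, not the real BRC repeats).
motifs = np.array([
    list("FHTASG"), list("FYTASR"), list("LHTAAG"), list("FHSASG"),
    list("YHTVSG"), list("FNTASQ"), list("FHTASA"), list("WHTASG"),
])

rng = np.random.default_rng(6)
# Hypothetical per-residue interaction energies (kcal/mol, lower = more
# favourable), shape (n_motifs, n_positions).
energies = rng.normal(-2.0, 1.0, size=motifs.shape)

# Pick, at every position, the residue from the motif with the lowest energy.
best_motif_per_position = energies.argmin(axis=0)
chimera = "".join(motifs[m, p] for p, m in enumerate(best_motif_per_position))
print("Chimera residues:", chimera)
print("Summed interaction energy:", round(float(energies.min(axis=0).sum()), 2))
```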

Relevance:

40.00%

Publisher:

Abstract:

A remarkable feature of the carcinogenicity of inorganic arsenic is that while human exposures to high concentrations of inorganic arsenic in drinking water are associated with increases in skin, lung, and bladder cancer, inorganic arsenic has not typically caused tumors in standard laboratory animal test protocols. Inorganic arsenic administered for periods of up to 2 yr to various strains of laboratory mice, including the Swiss CD-1, Swiss CR:NIH(S), C57Bl/6p53(+/-), and C57Bl/6p53(+/+), has not resulted in significant increases in tumor incidence. However, Ng et al. (1999) have reported a 40% tumor incidence in C57Bl/6J mice exposed to arsenic in their drinking water throughout their lifetime, with no tumors reported in controls. In order to investigate the potential role of tissue dosimetry in differential susceptibility to arsenic carcinogenicity, a physiologically based pharmacokinetic (PBPK) model for inorganic arsenic in the rat, hamster, monkey, and human (Mann et al., 1996a, 1996b) was extended to describe the kinetics in the mouse. The PBPK model was parameterized in the mouse using published data from acute exposures of B6C3F1 mice to arsenate, arsenite, monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA) and validated using data from acute exposures of C57Black mice. Predictions of the acute model were then compared with data from chronic exposures. There was no evidence of changes in the apparent volume of distribution or in the tissue-plasma concentration ratios between acute and chronic exposure that might support the possibility of inducible arsenite efflux. The PBPK model was also used to project tissue dosimetry in the C57Bl/6J study, in comparison with tissue levels in studies having shorter duration but higher arsenic treatment concentrations. The model evaluation indicates that pharmacokinetic factors do not provide an explanation for the difference in outcomes across the various mouse bioassays. Other possible explanations may relate to strain-specific differences, or to the different durations of dosing in each of the mouse studies, given the evidence that inorganic arsenic is likely to be active in the later stages of the carcinogenic process. [Authors]
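To illustrate how a PBPK-type model projects tissue dosimetry, here is a deliberately minimal compartmental sketch (gut, plasma, one tissue) with illustrative rate constants. It is not the Mann et al. arsenic model, which tracks several arsenic species and organs across species.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order rate constants (1/h) for a minimal oral-dosing
# model: gut -> plasma absorption, plasma <-> tissue exchange, elimination.
KA, K_PT, K_TP, KE = 1.0, 0.6, 0.3, 0.2

def mass_balance(t, y):
    gut, plasma, tissue = y
    d_gut = -KA * gut
    d_plasma = KA * gut - (K_PT + KE) * plasma + K_TP * tissue
    d_tissue = K_PT * plasma - K_TP * tissue
    return [d_gut, d_plasma, d_tissue]

# Single oral dose of 1 (arbitrary units) placed in the gut compartment.
solution = solve_ivp(mass_balance, t_span=(0, 48), y0=[1.0, 0.0, 0.0],
                     t_eval=np.linspace(0, 48, 7))
for t, tissue in zip(solution.t, solution.y[2]):
    print(f"t = {t:4.0f} h   tissue amount = {tissue:.3f}")
```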

Relevance:

40.00%

Publisher:

Abstract:

PURPOSE: Few studies compare the variabilities that characterize environmental (EM) and biological monitoring (BM) data. Indeed, comparing their respective variabilities can help to identify the best strategy for evaluating occupational exposure. The objective of this study is to quantify the biological variability associated with 18 bio-indicators currently used in work environments. METHOD: Intra-individual (BV(intra)), inter-individual (BV(inter)), and total biological variability (BV(total)) were quantified using validated physiologically based toxicokinetic (PBTK) models coupled with Monte Carlo simulations. Two environmental exposure profiles with different levels of variability were considered (GSD of 1.5 and 2.0). RESULTS: PBTK models coupled with Monte Carlo simulations were successfully used to predict the biological variability of biological exposure indicators. The predicted values follow a lognormal distribution, characterized by GSD ranging from 1.1 to 2.3. Our results show that there is a link between biological variability and the half-life of bio-indicators, since BV(intra) and BV(total) both decrease as the biological indicator half-lives increase. BV(intra) is always lower than the variability in the air concentrations. On an individual basis, this means that the variability associated with the measurement of biological indicators is always lower than the variability characterizing airborne levels of contaminants. For a group of workers, BM is less variable than EM for bio-indicators with half-lives longer than 10-15 h. CONCLUSION: The variability data obtained in the present study can be useful in the development of BM strategies for exposure assessment and can be used to calculate the number of samples required for guiding industrial hygienists or medical doctors in decision-making.
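The Monte Carlo step described above can be sketched with a one-compartment stand-in for a PBTK model: air concentrations and physiological parameters are sampled per worker, and the GSD of the simulated end-of-shift bio-indicator levels is computed. Distributions and parameters are illustrative, not those of the 18 validated models used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
N_WORKERS = 5000
EXPOSURE_GSD = 1.5           # variability of airborne concentrations
SHIFT_HOURS = 8.0

# One-compartment stand-in for a PBTK model: the end-of-shift bio-indicator
# level depends on the (log-normally varying) air concentration and on each
# worker's elimination half-life (illustrative distributions).
air = np.exp(rng.normal(np.log(1.0), np.log(EXPOSURE_GSD), N_WORKERS))
half_life = rng.normal(10.0, 2.0, N_WORKERS).clip(min=2.0)     # hours
k_elim = np.log(2) / half_life
bio_indicator = air / k_elim * (1 - np.exp(-k_elim * SHIFT_HOURS))

# Geometric standard deviation of the simulated bio-indicator levels.
gsd = np.exp(np.std(np.log(bio_indicator)))
print(f"GSD of air concentrations:   {EXPOSURE_GSD:.2f}")
print(f"GSD of bio-indicator levels: {gsd:.2f}")
```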

Relevance:

40.00%

Publisher:

Abstract:

Summary: The specific CD8+ T cell immune response against tumors relies on the recognition by the T cell receptor (TCR) on cytotoxic T lymphocytes (CTL) of antigenic peptides bound to the class I major histocompatibility complex (MHC) molecule. Such tumor-associated antigenic peptides are the focus of tumor immunotherapy with peptide vaccines. The strategy for obtaining an improved immune response often involves the design of modified tumor-associated antigenic peptides. Such modifications aim at creating higher-affinity and/or degradation-resistant peptides and require precise structures of the peptide-MHC class I complex. In addition, the modified peptide must be cross-recognized by CTLs specific for the parental peptide, i.e. preserve the structure of the epitope. Detailed structural information on the modified peptide in complex with MHC is necessary for such predictions. In this thesis, the main focus is the development of theoretical in silico methods for prediction of both structure and cross-reactivity of peptide-MHC class I complexes. Applications of these methods in the context of immunotherapy are also presented. First, a theoretical method for structure prediction of peptide-MHC class I complexes is developed and validated. The approach is based on a molecular dynamics protocol to sample the conformational space of the peptide in its MHC environment. The sampled conformers are evaluated using conformational free energy calculations. The method, which is evaluated for its ability to reproduce 41 X-ray crystallographic structures of different peptide-MHC class I complexes, shows an overall prediction success of 83%. Importantly, in the clinically highly relevant subset of peptide-HLA-A*0201 complexes, the prediction success is 100%. Based on these structure predictions, a theoretical approach for prediction of cross-reactivity is developed and validated. This method involves the generation of quantitative structure-activity relationships using three-dimensional molecular descriptors and a genetic neural network. The generated relationships are highly predictive, as shown by high cross-validated correlation coefficients (0.78-0.79). Together, the theoretical methods developed here open the door for efficient rational design of improved peptides to be used in immunotherapy.

Résumé : La réponse immunitaire spécifique contre des tumeurs dépend de la reconnaissance par les récepteurs des cellules T CD8+ de peptides antigéniques présentés par les complexes majeurs d'histocompatibilité (CMH) de classe I. Ces peptides sont utilisés comme cible dans l'immunothérapie par vaccins peptidiques. Afin d'augmenter la réponse immunitaire, les peptides sont modifiés de façon à améliorer l'affinité et/ou la résistance à la dégradation. Ceci nécessite de connaître la structure tridimensionnelle des complexes peptide-CMH. De plus, les peptides modifiés doivent être reconnus par des cellules T spécifiques du peptide natif. La structure de l'épitope doit donc être préservée et des structures détaillées des complexes peptide-CMH sont nécessaires. Dans cette thèse, le thème central est le développement des méthodes computationnelles de prédiction des structures des complexes peptide-CMH classe I et de la reconnaissance croisée. Des applications de ces méthodes de prédiction à l'immunothérapie sont également présentées. Premièrement, une méthode théorique de prédiction des structures des complexes peptide-CMH classe I est développée et validée. 
Cette méthode est basée sur un échantillonnage de l'espace conformationnel du peptide dans le contexte du récepteur CMH classe I par dynamique moléculaire. Les conformations sont évaluées par leurs énergies libres conformationnelles. La méthode est validée par sa capacité à reproduire 41 structures des complexes peptide-CMH classe I obtenues par cristallographie aux rayons X. Le succès prédictif général est de 83%. Pour le sous-groupe HLA-A*0201 de complexes de grande importance pour l'immunothérapie, ce succès est de 100%. Deuxièmement, à partir de ces structures prédites in silico, une méthode théorique de prédiction de la reconnaissance croisée est développée et validée. Celle-ci consiste à générer des relations structure-activité quantitatives en utilisant des descripteurs moléculaires tridimensionnels et un réseau de neurones couplé à un algorithme génétique. Les relations générées montrent une capacité de prédiction remarquable avec des valeurs de coefficients de corrélation de validation croisée élevées (0.78-0.79). Les méthodes théoriques développées dans le cadre de cette thèse ouvrent la voie du design de vaccins peptidiques améliorés.
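For the cross-reactivity models, predictive power is judged by the cross-validated correlation coefficient. The sketch below computes a leave-one-out q² for a plain linear model on synthetic descriptors, standing in for the three-dimensional descriptors and the genetic neural network used in the thesis; it is not the thesis code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(8)

# Synthetic stand-ins: 40 peptides, 5 molecular descriptors, and an activity
# (e.g. a cross-reactivity measure) depending on two of them plus noise.
X = rng.normal(size=(40, 5))
y = 1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(0.0, 0.3, 40)

# Leave-one-out cross-validated predictions, then q^2.
predictions = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    predictions[test_idx] = model.predict(X[test_idx])

q2 = 1 - np.sum((y - predictions) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Cross-validated q^2: {q2:.2f}")
```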

Relevance:

40.00%

Publisher:

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are, therefore, of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach in order to allow multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes in an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. While man-machine interactions are modeled as triggering events for the state spaces of the machines, the CREAM cognitive behavior model is used in order to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flow and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
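A minimal place/transition Petri net sketch (plain Python rather than the CO-OPN formalism) of the basic idea used above: machine state spaces as places, a man-machine interaction as a triggering transition, and entities flowing between them. Names and structure are illustrative, not taken from the MORM model.

```python
# Minimal place/transition Petri net: a machine that is either idle or
# running, started by an operator action (the "triggering event") and
# returned to idle when its processing step completes.
marking = {"idle": 1, "running": 0, "parts_waiting": 2, "parts_done": 0}

transitions = {
    # name: (tokens consumed, tokens produced)
    "operator_starts_machine": ({"idle": 1, "parts_waiting": 1}, {"running": 1}),
    "processing_completes":    ({"running": 1}, {"idle": 1, "parts_done": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[place] >= n for place, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    if not enabled(name):
        raise ValueError(f"{name} is not enabled")
    for place, n in inputs.items():
        marking[place] -= n
    for place, n in outputs.items():
        marking[place] += n

# Process the two waiting parts.
for _ in range(2):
    fire("operator_starts_machine")
    fire("processing_completes")
print(marking)   # {'idle': 1, 'running': 0, 'parts_waiting': 0, 'parts_done': 2}
```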