60 results for GIS BASED SIMULATION
at Université de Lausanne, Switzerland
Abstract:
The study investigates the possibility of incorporating fracture intensity and block geometry as spatially continuous parameters in GIS-based systems. For this purpose, a deterministic method has been implemented to estimate block size (Bloc3D) and joint frequency (COLTOP). In addition to measuring the block size, the Bloc3D method provides a 3D representation of the shape of individual blocks. These two methods were applied using field measurements (joint set orientation and spacing) performed over a large field area in the Swiss Alps. This area is characterized by a complex geology, a number of different rock masses and varying degrees of metamorphism. The spatial variability of the parameters was evaluated with regard to lithology and major faults. A model incorporating these measurements and observations into a GIS system to assess the risk associated with rockfalls is proposed. The analysis concludes with a discussion of the feasibility of such an application in regularly and irregularly jointed rock masses, with persistent and impersistent discontinuities.
Abstract:
Rockfall propagation areas can be determined using a simple geometric rule known as the shadow angle or energy line method, based on a simple Coulomb frictional model implemented in the CONEFALL computer program. Runout zones are estimated from a digital terrain model (DTM) and a grid file containing the cells representing potential rockfall source areas. The cells of the DTM that are lower in altitude and located within a cone centered on a rockfall source cell belong to the potential propagation area associated with that source cell. In addition, the CONEFALL method allows estimation of the mean and maximum velocities and energies of blocks in the rockfall propagation areas. Previous studies indicate that the cone slope angle ranges from 27° to 37°, depending on the assumptions made, i.e. slope morphology, probability of reaching a point, maximum runout, and field observations. Different solutions based on previous work and an example of an actual rockfall event are presented here.
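The geometric rule lends itself to a compact raster implementation. The sketch below is a minimal, illustrative version of the energy-line test (the function name, the synthetic DTM in the usage example and the 32° cone angle are our own assumptions, not CONEFALL code): a cell belongs to the runout zone when the terrain drops below the cone surface centred on the source, and a velocity estimate follows from the height of the energy line above the ground.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cone_runout(dtm, src, cell_size, cone_angle_deg=32.0):
    """Energy-line (shadow-angle) runout sketch on a raster DTM.

    A cell lies in the propagation area when the terrain falls below
    the cone surface of slope `cone_angle_deg` centred on the source
    cell. Velocity is estimated from the height dh of the energy line
    above the terrain as v = sqrt(2 * g * dh).
    """
    si, sj = src
    rows, cols = np.indices(dtm.shape)
    dist = cell_size * np.hypot(rows - si, cols - sj)
    cone_surface = dtm[si, sj] - dist * np.tan(np.radians(cone_angle_deg))
    head = cone_surface - dtm                      # energy line height above terrain
    reachable = (dist > 0) & (head > 0)
    velocity = np.where(reachable,
                        np.sqrt(2.0 * G * np.clip(head, 0.0, None)),
                        0.0)
    return reachable, velocity
```

On a synthetic planar slope dropping 10 m per 10 m cell, cells straight downslope of the source fall inside the 32° cone while cells upslope do not.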
Abstract:
Synthesis report: The article that forms the subject of my thesis evaluates a new pedagogical approach for teaching certain chapters of pathophysiology. The teaching design alternates ex-cathedra lectures with the use of a website containing clinical vignettes. While working through these vignettes, students are invited to request the laboratory tests whose relevance they can justify for the clinical case under study. The novelty of the approach lies in the fact that, before giving the lecture, the teacher can consult the statistics of laboratory requests and thus orient the lecture towards the points the students have misunderstood. After the lecture, students can consult the complete clinical vignette online, with explanations. At the end of the whole course, an evaluation was conducted among the students. The approach was implemented during two consecutive years, and the article discusses the results. We concluded that this innovative teaching method leads students to prepare better for lectures, while allowing the teacher to identify more precisely which topics the students found difficult and thus to fine-tune the lecture accordingly. My thesis work consisted of designing this learning system, building the web application for the clinical vignettes and deploying it over two consecutive years. I then analysed the evaluation data and wrote the article, which I submitted to the journal 'Medical Teacher'. After a few corrections and clarifications requested by the reviewers, the article was accepted and published. This work led to a second version of the web application, which is currently used in module 3.1 of the third year at the School of Medicine in Lausanne.
Summary: Since the early days of sexual selection research, our understanding of the selective forces acting on males and females during reproduction has increased remarkably. However, despite a long tradition of experimental and theoretical work in this field and relentless effort, numerous questions remain unanswered and many results are conflicting. Moreover, the interface between sexual selection and conservation biology has to date received little attention, despite existing evidence for its importance. In the present thesis, I first used an empirical approach to test various sexual selection hypotheses in a population of whitefish of central Switzerland. This particular population is characterized by a high prevalence of gonadal alterations in males. In particular, I challenged the hypothesis that whitefish males displaying peculiar gonadal features are of lower genetic quality than other seemingly normal males. Additionally, I also worked on identifying important determinants of sperm behavior. During a second, theoretical part of my work, which belongs to a larger project on the evolution of female mate preferences in harvested fish populations, I developed an individual-based simulation model to estimate how different mate discrimination costs affect the demographic behavior of fish populations and the evolutionary trajectories of female mate preferences. This latter work provided me with some insight into a recently published article addressing the importance of sexual selection for harvesting-induced evolution. I built upon this insight in a short perspective paper. In parallel, I let some methodological questions drive my thoughts and wrote an essay about possible synergies between the biological, philosophical and statistical approaches to biological questions.
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). An estimation of the debris flow magnitude was neglected, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology and debris flow hazard initiation maps, as input to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows.
Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of the debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, for the validation and transferability of the parameters and results to other study areas, more testing is needed.
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. Therefore, the development of indicative susceptibility hazard maps is of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using a digital elevation model (DEM) with a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its workability: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. It was shown that the quality of the DEM is the most important parameter for obtaining reliable propagation results, but also for identifying the potential debris flow sources.
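The spreading step of such susceptibility models can be illustrated with a multiple-flow-direction pass over a DEM. The sketch below is a simplified, assumed reconstruction rather than Flow-R source code: arrival probability is split among the strictly lower eight neighbours in proportion to slope raised to a Holmgren-type exponent, processing cells from high to low elevation.

```python
import numpy as np

def mfd_spread(dem, sources, cell_size=10.0, x=4.0):
    """Multiple-flow-direction spreading with a Holmgren-type exponent x.

    Arrival probability in each cell is split among its strictly lower
    8-neighbours proportionally to slope**x; cells are processed from
    high to low elevation so each cell is finished before its receivers.
    """
    p = np.zeros_like(dem, dtype=float)
    for i, j in sources:
        p[i, j] = 1.0
    n, m = dem.shape
    # flat indices sorted by decreasing elevation -> (row, col) pairs
    order = np.dstack(np.unravel_index(np.argsort(-dem, axis=None), dem.shape))[0]
    for i, j in order:
        if p[i, j] <= 0.0:
            continue
        targets, weights = [], []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                a, b = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= a < n and 0 <= b < m \
                        and dem[a, b] < dem[i, j]:
                    run = cell_size * np.hypot(di, dj)
                    weights.append(((dem[i, j] - dem[a, b]) / run) ** x)
                    targets.append((a, b))
        total = sum(weights)
        for w, (a, b) in zip(weights, targets):
            p[a, b] += p[i, j] * w / total
    return p
```

On a planar slope all mass released at the top eventually passes through the bottom row, so the arrival probabilities there sum to one; larger exponents concentrate the flow along the steepest descent.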
Abstract:
Objectives: Gentamicin is one of the most commonly prescribed antibiotics for suspected or proven infection in newborns. Because of age-associated (pre- and postnatal) changes in body composition and organ function, large interindividual variability in gentamicin drug levels exists, thus requiring close monitoring of this drug due to its narrow therapeutic index. We aimed to investigate clinical and demographic factors influencing gentamicin pharmacokinetics (PK) in a large cohort of unselected newborns and to explore optimal regimens based on simulation. Methods: All gentamicin concentration data from newborns treated at the University Hospital Center of Lausanne between December 2006 and October 2011 were retrieved. Gentamicin concentrations were measured within the framework of a routine therapeutic drug monitoring program, in which two concentrations (at 1 h and 12 h) are systematically collected after the first administered dose, and a few additional concentrations are sampled along the treatment course. A population PK analysis was performed by comparing various structural models, and the effect of clinical and demographic factors on gentamicin disposition was explored using NONMEM®. Results: A total of 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5 weeks) and 455 term newborns were used in the analysis. Most of the data (86%) were sampled after the first dose (C1h and C12h). A two-compartment model best characterized gentamicin PK. Average clearance (CL) was 0.044 L/h/kg (CV 25%), central volume of distribution (Vc) 0.442 L/kg (CV 18%), intercompartmental clearance (Q) 0.040 L/h/kg and peripheral volume of distribution (Vp) 0.122 L/kg. Body weight, gestational age and postnatal age positively influenced CL. The use of both gestational age and postnatal age better predicted CL than postmenstrual age alone. CL was affected by dopamine and furosemide administration and, non-significantly, by indometacin.
Body weight, gestational age and dopamine coadministration significantly influenced Vc. Model-based simulation confirms that preterm infants need higher doses, above 4 mg/kg, and extended-interval dosing regimens to achieve adequate concentrations. Conclusions: This study, performed on a very large cohort of neonates, identified important factors influencing gentamicin PK. The model will serve to elaborate a Bayesian tool for dosage individualization based on a single measurement.
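The reported typical values can be used to illustrate what a two-compartment concentration profile looks like under a standard dose. The sketch below is purely illustrative (an IV bolus stands in for the short gentamicin infusion, and the 4 mg/kg dose and Euler integration scheme are our assumptions, not the published model):

```python
import numpy as np

# typical values from the abstract, normalized per kg of body weight
CL, VC, Q, VP = 0.044, 0.442, 0.040, 0.122   # CL, Q in L/h/kg; VC, VP in L/kg

def simulate_conc(dose_mg_kg=4.0, hours=24.0, dt=0.001):
    """Euler integration of a two-compartment model after an IV bolus.

    Returns times (h) and central-compartment concentrations (mg/L).
    """
    n = int(hours / dt)
    a_c, a_p = dose_mg_kg, 0.0               # drug amounts, mg per kg
    times = np.arange(n + 1) * dt
    conc = np.empty(n + 1)
    conc[0] = a_c / VC
    for k in range(1, n + 1):
        transfer = Q * (a_c / VC - a_p / VP)  # central -> peripheral flux
        elim = CL * a_c / VC                  # elimination from central
        a_c += dt * (-elim - transfer)
        a_p += dt * transfer
        conc[k] = a_c / VC
    return times, conc
```

With these parameters the peak after a 4 mg/kg bolus is dose/Vc ≈ 9 mg/L, and the 1 h and 12 h points of the profile correspond to the routine TDM sampling times mentioned above.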
Abstract:
Many animals that live in groups maintain competitive relationships, yet avoid continual fighting, by forming dominance hierarchies. We compare predictions of stochastic, individual-based models with empirical experimental evidence using shore crabs to test competing hypotheses regarding hierarchy development. The models test (1) what information individuals use when deciding to fight or retreat, (2) how past experience affects current resource-holding potential, and (3) how individuals deal with changes to the social environment. First, we conclude that crabs assess only their own state and not their opponent's when deciding to fight or retreat. Second, willingness to enter, and performance in, aggressive contests are influenced by previous contest outcomes. Winning increases the likelihood of both fighting and winning future interactions, while losing has the opposite effect. Third, when groups with established dominance hierarchies dissolve and new groups form, individuals reassess their ranks, showing no memory of previous rank or group affiliation. With every change in group composition, individuals fight for their new ranks. This iterative process carries over as groups dissolve and form, which has important implications for the relationship between ability and hierarchy rank. We conclude that dominance hierarchies emerge through an interaction of individual and social factors, and discuss these findings in terms of an underlying mechanism. Overall, our results are consistent with crabs using a cumulative assessment strategy iterated across changes in group composition, in which aggression is constrained by an absolute threshold in energy spent and damage received while fighting.
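A minimal individual-based sketch of the winner and loser effects described above can make the mechanism concrete. All parameter values and the update rule below are illustrative assumptions, not the fitted model from the study:

```python
import random

class Crab:
    def __init__(self, rhp):
        self.rhp = rhp          # resource-holding potential
        self.wins = 0
        self.losses = 0

def contest(a, b, rng, gain=0.1, loss=0.1):
    """One pairwise contest with winner and loser effects.

    The stronger fighter wins probabilistically, and the outcome feeds
    back on resource-holding potential: winning raises RHP (making
    future fighting and winning more likely), losing lowers it.
    """
    p_a = a.rhp / (a.rhp + b.rhp)
    winner, loser = (a, b) if rng.random() < p_a else (b, a)
    winner.rhp *= 1 + gain
    winner.wins += 1
    loser.rhp *= 1 - loss
    loser.losses += 1

def form_hierarchy(n=8, rounds=30, seed=1):
    """Repeated random contests in a group; returns crabs ranked by RHP."""
    rng = random.Random(seed)
    crabs = [Crab(rng.uniform(0.8, 1.2)) for _ in range(n)]
    for _ in range(rounds):
        a, b = rng.sample(crabs, 2)
        contest(a, b, rng)
    return sorted(crabs, key=lambda c: c.rhp, reverse=True)
```

Dissolving and re-forming groups, as in the experiments, would correspond to resetting the ranking and replaying contests with the carried-over RHP values.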
Abstract:
Adaptive dynamics shows that a continuous trait under frequency-dependent selection may first converge to a singular point, followed by a spontaneous transition from a unimodal trait distribution into a bimodal one, which is called "evolutionary branching". Here, we study evolutionary branching in a deme-structured population by constructing a quantitative genetic model for the trait variance dynamics, which allows us to obtain an analytic condition for evolutionary branching. This is first shown to agree with previous conditions for branching expressed in terms of relatedness between interacting individuals within demes and obtained from mutant-resident systems. We then show that this branching condition can be markedly simplified when the evolving trait affects fecundity and/or survival, as opposed to affecting population structure, as would occur in the case of the evolution of dispersal. As an application of our model, we evaluate the threshold migration rate below which evolutionary branching cannot occur in a pairwise interaction game. This agrees very well with individual-based simulation results.
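The branching phenomenon itself can be reproduced in a minimal individual-based simulation. The sketch below is a well-mixed caricature, not the deme-structured model of the study, and all parameter values are illustrative: a Gaussian carrying capacity peaking at x = 0 combined with a narrower Gaussian competition kernel makes selection frequency-dependent and disruptive near the singular point, so the population first converges toward x = 0 and can then split into two trait clusters.

```python
import math
import random

def branching_sim(n=100, gens=150, sigma_k=1.0, sigma_a=0.4,
                  mut_sd=0.05, seed=2):
    """Individual-based sketch of evolutionary branching.

    Fitness combines a carrying-capacity term K(x) (widest, sigma_k)
    and a competition term alpha(x - y) felt most strongly from
    similar phenotypes (narrower, sigma_a < sigma_k).  Offspring
    inherit the parent trait with Gaussian mutation.
    """
    rng = random.Random(seed)
    pop = [rng.gauss(0.5, 0.05) for _ in range(n)]  # start off the singular point
    for _ in range(gens):
        w = []
        for x in pop:
            k = math.exp(-x * x / (2 * sigma_k ** 2))
            comp = sum(math.exp(-(x - y) ** 2 / (2 * sigma_a ** 2))
                       for y in pop) / n
            w.append(math.exp(2.0 * (k - comp)))    # soft frequency-dependent selection
        pop = [rng.gauss(rng.choices(pop, weights=w)[0], mut_sd)
               for _ in range(n)]
    return pop
```

Plotting the trait distribution over generations (not shown) is the usual way to see whether, for given sigma_a and migration-like parameters, the unimodal cloud splits in two.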
Abstract:
Empirical studies indicate that the transition to parenthood is influenced by an individual's peer group. To study the mechanisms creating interdependencies across individuals' transition to parenthood and its timing we apply an agent-based simulation model. We build a one-sex model and provide agents with three different characteristics regarding age, intended education and parity. Agents endogenously form their network based on social closeness. Network members then may influence the agents' transition to higher parity levels. Our numerical simulations indicate that accounting for social interactions can explain the shift of first-birth probabilities in Austria over the period 1984 to 2004. Moreover, we apply our model to forecast age-specific fertility rates up to 2016.
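The ingredients listed above (one sex, heterogeneous agents, an endogenous network built on social closeness, peer influence on the transition to parenthood) can be sketched as follows. Every parameter and functional form here is an illustrative assumption, not the calibrated Austrian model:

```python
import random

class Agent:
    def __init__(self, age, education):
        self.age, self.education = age, education
        self.parity = 0                 # number of children

def closeness(a, b):
    """Social closeness: similar age plus shared intended education."""
    return 1.0 / (1 + abs(a.age - b.age)) + (a.education == b.education)

def simulate(n=200, years=10, k=10, base_p=0.05, peer_w=0.15, seed=3):
    """One-sex ABM: peers who are already parents raise the yearly
    probability of an agent's own transition to parenthood."""
    rng = random.Random(seed)
    agents = [Agent(rng.randint(18, 35), rng.choice("LMH")) for _ in range(n)]
    # endogenous network: each agent links to the k socially closest others
    nets = {a: sorted((b for b in agents if b is not a),
                      key=lambda b: -closeness(a, b))[:k] for a in agents}
    for _ in range(years):
        for a in agents:
            peer_share = sum(b.parity > 0 for b in nets[a]) / k
            if a.parity == 0 and rng.random() < base_p + peer_w * peer_share:
                a.parity = 1            # transition to parenthood
            a.age += 1
    return agents
```

Comparing runs with peer_w = 0 against peer_w > 0 is the natural way to see how social interaction shifts the simulated first-birth schedule.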
Abstract:
Computational modeling has become a widely used tool for unraveling the mechanisms of higher level cooperative cell behavior during vascular morphogenesis. However, experimenting with published simulation models or adding new assumptions to those models can be daunting for novice and even for experienced computational scientists. Here, we present a step-by-step, practical tutorial for building cell-based simulations of vascular morphogenesis using the Tissue Simulation Toolkit (TST). The TST is a freely available, open-source C++ library for developing simulations with the two-dimensional cellular Potts model, a stochastic, agent-based framework to simulate collective cell behavior. We will show the basic use of the TST to simulate and experiment with published simulations of vascular network formation. Then, we will present step-by-step instructions and explanations for building a recent simulation model of tumor angiogenesis. Demonstrated mechanisms include cell-cell adhesion, chemotaxis, cell elongation, haptotaxis, and haptokinesis.
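For readers unfamiliar with the cellular Potts model, the core of such a simulation is a Metropolis copy attempt on a lattice of cell indices. The fragment below is a minimal, self-contained Python sketch of that idea only (the TST itself is C++; the adhesion energies, temperature and single volume-constrained cell here are illustrative assumptions):

```python
import random
import numpy as np

J_CELL_MEDIUM, J_CELL_CELL = 8.0, 6.0     # adhesion energies per lattice bond
LAMBDA_V, TARGET_V, TEMP = 2.0, 25.0, 4.0  # volume constraint and temperature

def local_energy(grid, i, j, sigma):
    """Adhesion energy of placing spin `sigma` at site (i, j)."""
    e, (n, m) = 0.0, grid.shape
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = grid[(i + di) % n, (j + dj) % m]
        if nb != sigma:
            e += J_CELL_MEDIUM if (nb == 0 or sigma == 0) else J_CELL_CELL
    return e

def mcs_step(grid, rng):
    """One Metropolis copy attempt of the cellular Potts model."""
    n, m = grid.shape
    i, j = rng.randrange(n), rng.randrange(m)
    di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
    src = grid[(i + di) % n, (j + dj) % m]   # neighbour tries to copy itself
    tgt = grid[i, j]
    if src == tgt:
        return
    dE = local_energy(grid, i, j, src) - local_energy(grid, i, j, tgt)
    for s in (src, tgt):                     # volume-constraint contribution
        if s != 0:
            v = np.count_nonzero(grid == s)
            new_v = v + (1 if s == src else -1)
            dE += LAMBDA_V * ((new_v - TARGET_V) ** 2 - (v - TARGET_V) ** 2)
    if dE <= 0 or rng.random() < np.exp(-dE / TEMP):
        grid[i, j] = src
```

Chemotaxis, elongation and the other mechanisms listed above enter real CPM simulations as further terms in dE; this sketch keeps only adhesion and the volume constraint.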
Abstract:
Objectives: Several population pharmacokinetic (PPK) and pharmacokinetic-pharmacodynamic (PK-PD) analyses have been performed with the anticancer drug imatinib. Inspired by the approach of meta-analysis, we aimed to compare and combine results from published studies in a useful way, in particular for improving the clinical interpretation of imatinib concentration measurements in the scope of therapeutic drug monitoring (TDM). Methods: Original PPK analyses and PK-PD studies (PK surrogate: trough concentration Cmin; PD outcomes: optimal early response and specific adverse events) were searched systematically on MEDLINE. From each identified PPK model, a predicted concentration distribution under standard dosage was derived through 1000 simulations (NONMEM), after standardizing model parameters to common covariates. A "reference range" was calculated from pooled simulated concentrations in a semi-quantitative approach (without specific weighting) over the whole dosing interval. Meta-regression summarized relationships between Cmin and optimal/suboptimal early treatment response. Results: 9 PPK models and 6 relevant PK-PD reports in chronic myeloid leukemia (CML) patients were identified. Model-based predicted median Cmin ranged from 555 to 1388 ng/ml (grand median: 870 ng/ml; inter-quartile range: 520-1390 ng/ml). The probability of achieving optimal early response was predicted to increase from 60 to 85% between 520 and 1390 ng/ml across PK-PD studies (odds ratio for doubling Cmin: 2.7). Reporting of specific adverse events was too heterogeneous to perform a regression analysis. The general frequency of anemia, rash and fluid retention, however, increased consistently with Cmin, but less than response probability. Conclusions: Predicted drug exposure may differ substantially between various PPK analyses. In this review, heterogeneity was mainly attributed to 2 "outlying" models.
The established reference range seems to cover the range where both good efficacy and acceptable tolerance are expected for most patients. TDM-guided dose adjustment therefore appears justified for imatinib in CML patients. Its usefulness now remains to be prospectively validated in a randomized trial.
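The pooling step can be illustrated numerically. In this hedged sketch, each published model is reduced to a median and a coefficient of variation (the three models and their values below are made up for illustration, not the identified studies), each contributes the same number of log-normal draws, and a reference range is read off the pooled quartiles:

```python
import numpy as np

def pooled_reference_range(models, n_sim=1000, seed=4):
    """Pool simulated trough concentrations across several PK models.

    `models` maps a label to (median_ng_ml, cv); each model contributes
    n_sim log-normally distributed Cmin draws, and the reference range
    is taken as the inter-quartile range of the pooled sample.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([
        rng.lognormal(mean=np.log(med),
                      sigma=np.sqrt(np.log(1 + cv ** 2)),
                      size=n_sim)
        for med, cv in models.values()])
    q25, q50, q75 = np.percentile(pooled, [25, 50, 75])
    return q25, q50, q75

# hypothetical medians/CVs spanning the 555-1388 ng/ml spread reported above
MODELS = {"A": (600, 0.40), "B": (900, 0.35), "C": (1350, 0.45)}
```

Weighting models by study size or precision, rather than pooling equally as here, would be the natural refinement of this semi-quantitative approach.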
Abstract:
Toxicokinetic modeling is a useful tool to describe or predict the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study in order to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take into account sex differences. We assessed in a previous human volunteer study the impact of sex on different biomarkers of exposure corresponding to the three organic solvents mentioned above. Results from that study suggested that not only physiological differences between men and women but also differences due to sex hormone levels could influence the toxicokinetics of the solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it was possible to use an existing general toxicokinetic model for other compounds. In fact, most of the simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. Results pointed out that, for the same exposure conditions, men and women can show important differences in urinary levels of biological indicators of exposure. Moreover, when running the models under simulated industrial working conditions, these differences could be even more pronounced.
In conclusion, a general and simple toxicokinetic model, adapted for three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations give evidence of an interindividual variability, an aspect that should have its place in the approaches for setting occupational exposure limits.
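The kind of metabolic sex difference discussed above can be mimicked with a deliberately simple one-compartment sketch: saturable (Michaelis-Menten, CYP2E1-like) metabolism during and after an 8-hour exposure window, with a higher maximal metabolic rate standing in for higher enzyme activity. All numbers are illustrative assumptions, not the calibrated four-compartment model:

```python
def solvent_tk(vmax, km=0.5, vd=40.0, uptake=5.0,
               exposure_h=8.0, total_h=24.0, dt=0.01):
    """One-compartment sketch with saturable metabolism.

    Constant uptake (mg/h) during the exposure window, Michaelis-Menten
    clearance throughout; returns the cumulative amount metabolised over
    `total_h`, a crude stand-in for a urinary biomarker level.
    """
    amount, metabolised, t = 0.0, 0.0, 0.0
    while t < total_h:
        conc = amount / vd                       # body concentration, mg/L
        rate = vmax * conc / (km + conc)         # saturable metabolic rate
        inflow = uptake if t < exposure_h else 0.0
        amount += dt * (inflow - rate)
        metabolised += dt * rate
        t += dt
    return metabolised

# illustrative only: higher CYP2E1-like activity -> more urinary metabolite
higher_activity = solvent_tk(vmax=12.0)
lower_activity = solvent_tk(vmax=8.0)
```

Varying vmax, vd or the exposure schedule in such a sketch is how metabolic and physiological differences propagate into different biomarker levels for the same external exposure.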
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulation through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
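The heart of the procedure, sampling the sparsely known variable conditioned on the exhaustively known one through a kernel density, can be sketched as follows. This is a 1-D Gaussian-kernel toy on synthetic collocated data with a fixed bandwidth chosen for illustration, not the actual algorithm of the study:

```python
import numpy as np

def sample_logk_given_sigma(log_sigma_obs, data_log_sigma, data_log_k,
                            bandwidth=0.1, rng=None):
    """Draw one log-K value from a kernel estimate of p(log K | log sigma).

    The joint density is a sum of Gaussian kernels centred on the
    collocated (log sigma, log K) pairs; conditioning on the observed
    electrical conductivity reweights the kernels, after which one
    kernel is sampled and a draw is taken from it.
    """
    rng = rng or np.random.default_rng()
    w = np.exp(-0.5 * ((log_sigma_obs - data_log_sigma) / bandwidth) ** 2)
    w /= w.sum()
    i = rng.choice(len(data_log_k), p=w)
    return rng.normal(data_log_k[i], bandwidth)

# synthetic collocated logs: log K roughly linear in log sigma, plus noise
rng = np.random.default_rng(5)
ls = rng.uniform(-1.0, 1.0, 200)
lk = 2.0 * ls + rng.normal(0.0, 0.1, 200)
draws = np.array([sample_logk_given_sigma(0.0, ls, lk, rng=rng)
                  for _ in range(500)])
```

In a sequential simulation the draw at each grid node would additionally be conditioned on previously simulated neighbours; this sketch isolates the kernel-density conditioning step only.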
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to exert much greater control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
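The two distinguishing features described above (perturbations drawn from a distribution conditioned to the geophysical data, and an objective restricted to the covariance at small lags) can be captured in a compact 1-D sketch. Everything below (the white-noise stand-in for the conditional sampler, the target covariances, the cooling schedule) is an illustrative assumption, not the algorithm as published:

```python
import numpy as np

def small_lag_cov(x, max_lag=3):
    """Empirical covariance of a 1-D field at lags 1..max_lag."""
    xc = x - x.mean()
    return np.array([np.mean(xc[:-h] * xc[h:]) for h in range(1, max_lag + 1)])

def anneal(conditional_sampler, target_cov, n=200, iters=3000,
           t0=1.0, cooling=0.999, seed=6):
    """SA sketch: perturb by re-drawing single cells from the
    geophysics-conditioned distribution; accept or reject on the
    mismatch of the small-lag covariance only."""
    rng = np.random.default_rng(seed)
    x = conditional_sampler(rng, n)
    obj = np.sum((small_lag_cov(x) - target_cov) ** 2)
    temp = t0
    for _ in range(iters):
        i = rng.integers(n)
        old = x[i]
        x[i] = conditional_sampler(rng, 1)[0]        # conditioned perturbation
        new_obj = np.sum((small_lag_cov(x) - target_cov) ** 2)
        if new_obj < obj or rng.random() < np.exp(-(new_obj - obj) / temp):
            obj = new_obj                            # accept
        else:
            x[i] = old                               # reject, restore
        temp *= cooling
    return x, obj

target = np.array([0.5, 0.25, 0.1])                  # assumed small-lag targets
sampler = lambda rng, k: rng.normal(0.0, 1.0, k)     # stand-in conditional draw
field, misfit = anneal(sampler, target)
```

Because the objective involves only the first few lags, larger-scale structure is left to whatever the conditional draws encode, which is precisely the division of labour the algorithm is built around.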