20 results for Numerical experiments
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Previous studies of the sediments of Lake Lucerne have shown that massive subaqueous mass movements affecting unconsolidated sediments on lateral slopes are a common process in this lake, and, in view of historical reports describing damaging waves on the lake, it was suggested that tsunamis generated by mass movements represent a considerable natural hazard on the lakeshores. Newly performed numerical simulations combining two-dimensional, depth-averaged models for mass-movement propagation and for tsunami generation, propagation and inundation reproduce a number of reported tsunami effects. Four analysed mass-movement scenarios (three based on documented slope failures involving volumes of 5.5 to 20.8 × 10⁶ m³) show peak wave heights of several metres and maximum runup of 6 to >10 m in the directly affected basins, while effects in neighbouring basins are less drastic. The tsunamis cause large-scale inundation over distances of several hundred metres on flat alluvial plains close to the mass-movement source areas. Basins at the ends of the lake experience regular water-level oscillations with characteristic periods of several minutes. The vulnerability of potentially affected areas has increased dramatically since the times of the damaging historical events, warranting a thorough evaluation of the hazard.
Abstract:
Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments of calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where the phenomenon of strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments by an evolution test function of stored over dissipated energy (lambda factor). Parameter studies of strain hardening, dislocation creep parameter, strain rates, temperature, and lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. The analysis could be improved through independent verification by thermographic analysis in physical laboratory experiments, assessing lambda factor evolution under laboratory conditions.
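For orientation, one standard parameterization of the Ramberg-Osgood hardening law and the stored-over-dissipated energy ratio named above can be sketched as follows (the paper's exact coefficients and normalizations are not given in this abstract, so the form is illustrative):

\[
\frac{\varepsilon}{\varepsilon_0} \;=\; \frac{\sigma}{\sigma_0} + \alpha\left(\frac{\sigma}{\sigma_0}\right)^{n},
\qquad \varepsilon_0 = \frac{\sigma_0}{E},
\qquad
\lambda(t) \;=\; \frac{W_{\mathrm{stored}}(t)}{W_{\mathrm{dissipated}}(t)},
\]

where σ₀ is a reference (yield) stress, E is Young's modulus, and α, n are hardening fit parameters; a λ that settles to a constant value signals the approach to steady state.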
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of adequately approximating the error due to partial convergence, which is correlated across both the design variables and the computational-time direction. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for solving the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
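A minimal numpy sketch of this modeling idea follows, assuming a squared-exponential base kernel and an error amplitude that decays exponentially with computational time; both the kernel shapes and g(t) = exp(-t/τ) are illustrative stand-ins for the paper's actual nonstationary covariance, and hyperparameters are fixed rather than estimated.

```python
# Hedged sketch: GP regression in the joint (design, computational-time) space.
# Kernel shapes and the decaying error amplitude are illustrative assumptions.
import numpy as np

def k_rbf(A, B, ls, var):
    # Stationary squared-exponential kernel between row-stacked inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) / ls) ** 2
    return var * np.exp(-0.5 * d2.sum(-1))

def k_joint(X, T, X2, T2, ls_x=0.3, var_f=1.0, ls_e=0.3, var_e=0.5, tau=50.0):
    # Converged-response part: depends on the design variables only.
    K = k_rbf(X, X2, ls_x, var_f)
    # Partial-convergence error: correlated in x, with time-decaying amplitude,
    # which makes the joint kernel nonstationary in t.
    g, g2 = np.exp(-T / tau), np.exp(-T2 / tau)
    return K + np.outer(g, g2) * k_rbf(X, X2, ls_e, var_e)

# Toy data: designs in [0,1], each simulation stopped at a different time t.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 1))
T = rng.uniform(10.0, 200.0, size=20)
y = np.sin(6.0 * X[:, 0]) + np.exp(-T / 50.0) * rng.normal(size=20)

# Predict the *converged* response (error part vanishes as t -> infinity), so
# the cross-covariance uses the converged-response kernel only.
K = k_joint(X, T, X, T) + 1e-8 * np.eye(len(y))
Xs = np.linspace(0.0, 1.0, 100)[:, None]
mu = k_rbf(Xs, X, 0.3, 1.0) @ np.linalg.solve(K, y)
```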
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝ^d, d = 2, 3. For the a posteriori analysis, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
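For orientation, hp-explicit residual-based indicators of this kind typically take the following generic shape (illustrative only; the paper's exact terms involve the quasi-Newtonian stress and are not given in this abstract):

\[
\eta_K^2 \;=\; \frac{h_K^2}{p_K^2}\,\|R_K\|_{L^2(K)}^2
\;+\; \frac{h_K}{p_K}\,\|R_{\partial K}\|_{L^2(\partial K)}^2
\;+\; \frac{p_K^2}{h_K}\,\|[\![u_{hp}]\!]\|_{L^2(\partial K)}^2,
\]

where R_K is the interior residual, R_∂K collects flux jumps across faces, and [[u_hp]] are the inter-element solution jumps. Computable upper and lower bounds then take the form c η ≤ |||u − u_hp||| ≤ C η with η² = Σ_K η_K², possibly up to data-oscillation terms and p-dependent constants.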
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed form formulas. We illustrate their performances in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
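For orientation, with the coverage function p_n(x) = P(f(x) ≥ T | A_n) under the Gaussian process model, one residual-uncertainty measure used in this line of work and the associated q-point criterion can be sketched as (illustrative notation, not a verbatim quote of the paper):

\[
H_n \;=\; \int_{\mathbb{X}} p_n(x)\,\big(1 - p_n(x)\big)\,d\mu(x),
\qquad
J_n(x_{n+1},\dots,x_{n+q}) \;=\; \mathbb{E}\big[\,H_{n+q} \,\big|\, \mathcal{A}_n\,\big],
\]

where A_n denotes the information available after n evaluations. The batch minimizing J_n is then evaluated in parallel, and closed-form expressions for J_n under the Gaussian model are what make such strategies computationally tractable.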
Abstract:
This bipartite comparative study aims at inspecting the similarities and differences between the Jones and Stokes–Mueller formalisms when modeling polarized light propagation with numerical simulations of the Monte Carlo type. In this first part, we review the theoretical concepts that concern light propagation and detection with both pure and partially/totally unpolarized states. The latter case involving fluctuations, or “depolarizing effects,” is of special interest here: Jones and Stokes–Mueller are equally apt to model such effects and are expected to yield identical results. In a second, ensuing paper, empirical evidence is provided by means of numerical experiments, using both formalisms.
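The standard bridge between the two formalisms (assumed here; the abstract itself does not display it) maps a deterministic Jones matrix J to its Mueller matrix via

\[
M \;=\; A\,(J \otimes J^{*})\,A^{-1},
\qquad
A \;=\; \begin{pmatrix} 1 & 0 & 0 & 1\\ 1 & 0 & 0 & -1\\ 0 & 1 & 1 & 0\\ 0 & i & -i & 0 \end{pmatrix},
\]

and depolarizing media are represented by ensemble averages ⟨A (J ⊗ J*) A⁻¹⟩ over fluctuating Jones realizations, which is precisely the sense in which the two formalisms are expected to yield identical results.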
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
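For orientation, for a random closed set X with coverage function p(x) = P(x ∈ X) and a reference measure μ, the Vorob’ev constructions mentioned above read (standard definitions from random set theory):

\[
Q_\beta \;=\; \{x : p(x) \ge \beta\},
\qquad
\mu\big(Q_{\beta^*}\big) \;=\; \mathbb{E}[\mu(X)],
\qquad
D \;=\; \mathbb{E}\big[\mu\big(Q_{\beta^*} \triangle X\big)\big].
\]

The Vorob’ev expectation Q_{β*} is obtained by tuning the threshold β* so that its measure matches the expected measure of X, and the Vorob’ev deviation D measures the average symmetric difference; in the paper's setting, X is the set of non-dominated points and p is estimated from Gaussian process conditional simulations.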
Abstract:
The aim of this paper is to present a new class of smoothness testing strategies in the context of hp-adaptive refinements based on continuous Sobolev embeddings. A modified form of the 1d smoothness indicators introduced in [26] is derived, and these indicators are then extended and applied to a higher-dimensional framework. A few numerical experiments in the context of the hp-adaptive FEM for a linear elliptic PDE are performed.
Abstract:
In this paper we develop an adaptive procedure for the numerical solution of general semilinear elliptic problems with possible singular perturbations. Our approach combines both prediction-type adaptive Newton methods and a linear adaptive finite element discretization (based on a robust a posteriori error analysis), thereby leading to a fully adaptive Newton–Galerkin scheme. Numerical experiments underline the robustness and reliability of the proposed approach for various examples.
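A minimal sketch of the adaptive-Newton ingredient follows, assuming a simple residual-based damping rule on a scalar equation; the paper's prediction-type strategy and its coupling to the adaptive Galerkin discretization are more elaborate and are not reproduced here.

```python
# Hedged sketch: damped Newton with residual-based step control for F(u) = 0.
def newton_adaptive(F, dF, u, tol=1e-10, max_it=50):
    t = 1.0                                  # damping parameter in (0, 1]
    for _ in range(max_it):
        r = F(u)
        if abs(r) < tol:
            break
        du = -r / dF(u)                      # Newton direction
        # Shrink the step until the residual shows sufficient decrease.
        while abs(F(u + t * du)) > (1.0 - 0.5 * t) * abs(r) and t > 1e-12:
            t *= 0.5
        u += t * du
        t = min(1.0, 2.0 * t)                # cautiously re-enlarge the step
    return u

# Example: u^3 - 2u - 5 = 0 has a root near u = 2.0946.
root = newton_adaptive(lambda u: u**3 - 2*u - 5, lambda u: 3*u**2 - 2, 1.0)
```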
Abstract:
Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte-Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.
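A numpy sketch of the updating idea for a single noise-free new observation (q = 1): existing paths are corrected with the kriging weights of the new point instead of re-factorizing the covariance and re-simulating. The kernel and hyperparameters are illustrative assumptions; the paper's formulae also cover batches (q > 1) and reuse further by-products not shown here.

```python
# Hedged sketch: residual-substitution update of GP conditional simulations.
import numpy as np

def k(A, B, ls=0.2):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

rng = np.random.default_rng(1)
xg = np.linspace(0.0, 1.0, 201)                   # simulation grid
xn = np.array([0.1, 0.5, 0.9])                    # n = 3 old observation sites
yn = np.array([0.0, 1.0, -0.5])

# Ensemble of 50 sample paths conditioned on the n old observations.
Kn = k(xn, xn) + 1e-10 * np.eye(xn.size)
W = np.linalg.solve(Kn, k(xn, xg)).T              # kriging weights, (grid, n)
mu = W @ yn                                       # conditional mean on grid
C = k(xg, xg) - W @ k(xn, xg)                     # conditional covariance
L = np.linalg.cholesky(C + 1e-8 * np.eye(xg.size))
paths = mu[:, None] + L @ rng.normal(size=(xg.size, 50))

# A new observation arrives at grid index iq: update the old paths in O(grid)
# operations per path, so they now interpolate the new datum as well.
iq, yq = 60, 0.7                                  # xg[60] = 0.3, observed value
lam = C[:, iq] / C[iq, iq]                        # kriging weights of new point
paths += np.outer(lam, yq - paths[iq, :])         # residual-substitution update
```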
Numerical simulations of impacts involving porous bodies: II. Comparison with laboratory experiments
Abstract:
The self-regeneration capacity of articular cartilage is limited, due to its avascular and aneural nature. Loaded explants and cell cultures demonstrated that chondrocyte metabolism can be regulated via physiologic loading. However, the explicit ranges of mechanical stimuli that correspond to a favourable metabolic response associated with extracellular matrix (ECM) synthesis remain elusive. Unsystematic protocols lacking this knowledge produce inconsistent results. This study aims to determine the intrinsic ranges of physical stimuli that increase ECM synthesis and simultaneously inhibit nitric oxide (NO) production in chondrocyte-agarose constructs, by numerically re-evaluating the experiments performed by Tsuang et al. (2008). Twelve loading patterns were simulated with poro-elastic finite element models in ABAQUS. Pressure on the solid matrix, von Mises stress, maximum principal stress and pore pressure were selected as intrinsic mechanical stimuli. Their development rates and magnitudes at the steady state of cyclic loading were calculated with MATLAB at the construct level. A concurrent increase in glycosaminoglycan and collagen was observed at 2300 Pa pressure and 40 Pa/s pressure rate. Between 0–1500 Pa and 0–40 Pa/s, NO production was consistently positive with respect to controls, whereas ECM synthesis was negative in the same range. A linear correlation was found between pressure rate and NO production (R = 0.77). The stress states identified in this study are generic and could be used to develop predictive algorithms for matrix production in agarose-chondrocyte constructs of arbitrary shape, size and agarose concentration. They could also help increase the efficacy of loading protocols for avascular tissue engineering. Copyright (c) 2010 John Wiley & Sons, Ltd.
Abstract:
OBJECTIVE: In a prospective study we investigated whether numerical and functional changes of CD4+CD25(high) regulatory T cells (Treg) were associated with the changes of disease activity observed during pregnancy and post partum in patients with rheumatoid arthritis (RA). METHODS: The frequency of CD4+CD25(high) T cells was determined by flow cytometry in 12 patients with RA and 14 healthy women during and after pregnancy. Fluorescence-activated cell sorting (FACS) was used to sort CD4+CD25(high) and CD4+CD25- T cells, which were stimulated with anti-CD3 and anti-CD28 monoclonal antibodies alone or in co-culture to investigate proliferation and cytokine secretion. RESULTS: Frequencies of CD4+CD25(high) Treg were significantly higher in the third trimester compared to 8 weeks post partum in patients and controls. Numbers of CD4+CD25(high) Treg inversely correlated with disease activity in the third trimester and post partum. In co-culture experiments, significantly higher amounts of IL10 and lower levels of tumour necrosis factor (TNF)alpha and interferon (IFN)gamma were found in supernatants of the third trimester compared to postpartum samples. These findings were independent of health or disease in pregnancy; however, postpartum TNFalpha and IFNgamma levels were higher in patients with disease flares. CONCLUSION: The amelioration of disease activity in the third trimester corresponded to the increased number of Treg, which induced a pronounced anti-inflammatory cytokine milieu. The pregnancy-related quantitative and qualitative changes of Treg suggest a beneficial effect of Treg on disease activity.