63 results for Stochastic simulation methods
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire within that step, with firing numbers drawn from a Poisson or binomial distribution. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
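To make the baseline concrete, here is a minimal Python sketch of the plain Poisson τ-leap step that the paper generalises; the two-reaction system, rate constants and function names are hypothetical illustrations, not the authors' benchmark.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-species system: A -> B (rate c1*A) and B -> A (rate c2*B).
    nu = np.array([[-1, +1],   # state change produced by reaction 1
                   [+1, -1]])  # state change produced by reaction 2

    def propensities(x, c1=1.0, c2=0.5):
        return np.array([c1 * x[0], c2 * x[1]])

    def tau_leap(x, tau, n_steps):
        """Advance the state x with a fixed leap size tau using Poisson firing counts."""
        for _ in range(n_steps):
            a = propensities(x)
            # One set of Poisson draws per step: number of firings of each channel.
            k = rng.poisson(a * tau)
            x = np.maximum(x + k @ nu, 0)  # crude guard against negative populations
        return x

    print(tau_leap(np.array([100, 0]), tau=0.05, n_steps=200))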
Abstract:
The self intermediate scattering function Fs(k,t) of liquid lithium near the melting temperature is calculated by molecular dynamics. The results are compared with the predictions of several theoretical approaches, paying special attention to the Lovesey model and the Wahnström and Sjögren mode-coupling theory. To this end, the results for the Fs(k,t) second memory function predicted by both models are compared with the ones calculated from the simulations.
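For reference, the quantity Fs(k,t) computed from a molecular dynamics trajectory is the standard self part of the intermediate scattering function (a textbook definition, not specific to this work):

    F_s(k,t) = \frac{1}{N} \sum_{j=1}^{N}
               \left\langle \exp\!\left[\, i\,\mathbf{k}\cdot\bigl(\mathbf{r}_j(t)-\mathbf{r}_j(0)\bigr) \right] \right\rangle

where r_j(t) is the position of atom j at time t and the average runs over time origins and wave vectors of modulus k.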
Abstract:
We have studied how leaders emerge in a group as a consequence of interactions among its members. We propose that leaders can emerge as a consequence of a self-organized process based on local rules of dyadic interactions among individuals. Flocks are an example of self-organized behaviour in a group and properties similar to those observed in flocks might also explain some of the dynamics and organization of human groups. We developed an agent-based model that generated flocks in a virtual world and implemented it in a multi-agent simulation computer program that computed indices at each time step of the simulation to quantify the degree to which a group moved in a coordinated way (index of flocking behaviour) and the degree to which specific individuals led the group (index of hierarchical leadership). We ran several series of simulations in order to test our model and determine how these indices behaved under specific agent and world conditions. We identified the agent, world property, and model parameters that made stable, compact flocks emerge, and explored possible environmental properties that predicted the probability of becoming a leader.
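As an illustration of the kind of local interaction rules such agent-based flocking models are typically built from, here is a minimal, generic boids-style update in Python; the alignment/cohesion/separation rules and parameters below are a standard textbook sketch, not the authors' specific dyadic rules or leadership indices.

    import numpy as np

    def flock_step(pos, vel, dt=0.1, r_nbr=2.0, r_sep=0.5,
                   w_align=0.05, w_coh=0.01, w_sep=0.1):
        """One boids-style update: alignment, cohesion and separation from local neighbours."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (d > 0) & (d < r_nbr)
            if nbr.any():
                new_vel[i] += w_align * (vel[nbr].mean(axis=0) - vel[i])   # alignment
                new_vel[i] += w_coh * (pos[nbr].mean(axis=0) - pos[i])     # cohesion
            close = (d > 0) & (d < r_sep)
            if close.any():
                new_vel[i] += w_sep * (pos[i] - pos[close].mean(axis=0))   # separation
        return pos + dt * new_vel, new_vel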
Abstract:
A physical model for the simulation of x-ray emission spectra from samples irradiated with kilovolt electron beams is proposed. Inner shell ionization by electron impact is described by means of total cross sections evaluated from an optical-data model. A double differential cross section is proposed for bremsstrahlung emission, which reproduces the radiative stopping powers derived from the partial wave calculations of Kissel, Quarles and Pratt [At. Data Nucl. Data Tables 28, 381 (1983)]. These ionization and radiative cross sections have been introduced into a general-purpose Monte Carlo code, which performs simulation of coupled electron and photon transport for arbitrary materials. To improve the efficiency of the simulation, interaction forcing, a variance reduction technique, has been applied for both ionizing collisions and radiative events. The reliability of simulated x-ray spectra is analyzed by comparing simulation results with electron probe measurements.
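The variance-reduction idea behind interaction forcing can be sketched as follows (an illustrative Python fragment under generic assumptions, not the code used in the paper): the mean free path of the selected interaction is artificially shortened by a forcing factor, and the scores produced by the forced interactions are weighted by its inverse so that expected values remain unbiased.

    import numpy as np

    rng = np.random.default_rng(1)

    def forced_step(inv_mfp, forcing_factor):
        """Sample a flight distance with the interaction probability per unit path
        length artificially increased by `forcing_factor` (mean free path divided
        by it). Contributions from forced interactions (e.g. emitted x rays) must
        be multiplied by the returned `weight` to keep the estimates unbiased."""
        step = rng.exponential(1.0 / (inv_mfp * forcing_factor))  # shortened free path
        weight = 1.0 / forcing_factor                             # compensating weight
        return step, weight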
Abstract:
This paper presents a methodology to determine the parameters used in the simulation of delamination in composite materials using decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is proposed. The procedure ensures that the energy dissipated by the fracture process is correctly computed. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used in the simulation of fracture processes.
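For orientation only: closed-form choices for the interface (penalty) stiffness in cohesive-zone delamination models are commonly written in terms of the through-thickness modulus E_3 and the thickness t of the adjacent sublaminate,

    K = \frac{\alpha\, E_3}{t}, \qquad \alpha \gg 1

with \alpha a dimensionless parameter large enough not to soften the laminate; whether this coincides exactly with the expression derived in the paper is an assumption made here for illustration.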
Abstract:
A damage model for the simulation of delamination propagation under high-cycle fatigue loading is proposed. The basis for the formulation is a cohesive law that links fracture and damage mechanics to establish the evolution of the damage variable in terms of the crack growth rate dA/dN. The damage state is obtained as a function of the loading conditions as well as the experimentally determined coefficients of the Paris Law crack propagation rates for the material. It is shown that by using the constitutive fatigue damage model in a structural analysis, experimental results can be reproduced without the need for additional model-specific curve-fitting parameters.
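For context, Paris-type propagation laws for delamination are commonly written in terms of the energy release rate; a generic form (the exact variables used in the paper are not given in the abstract) is

    \frac{dA}{dN} = C \left( \frac{\Delta G}{G_c} \right)^{m}

where \Delta G is the variation of the energy release rate within a load cycle, G_c the fracture toughness, and C and m the experimentally determined Paris coefficients.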
Abstract:
A thermodynamically consistent damage model for the simulation of progressive delamination under variable mode ratio is presented. The model is formulated in the context of Damage Mechanics. The constitutive equation that results from the definition of the free energy as a function of a damage variable is used to model the initiation and propagation of delamination. A new delamination initiation criterion is developed to ensure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The proposed formulation accounts for crack closure effects, avoiding interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation. The numerical predictions given by the model are compared with experimental results.
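The backbone of such a formulation can be summarised by the standard relations of damage mechanics applied to a cohesive interface (a generic sketch under common assumptions, not the paper's exact free energy):

    \psi(\Delta, d) = (1-d)\,\psi^{0}(\Delta), \qquad
    \boldsymbol{\tau} = \frac{\partial \psi}{\partial \Delta} = (1-d)\,K\,\Delta, \qquad
    Y = -\frac{\partial \psi}{\partial d} = \psi^{0}(\Delta) \ge 0

where \Delta is the displacement jump across the interface, d the scalar damage variable and Y its conjugate thermodynamic force; Y \ge 0 together with \dot{d} \ge 0 guarantees non-negative dissipation.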
Abstract:
Traffic forecasts provide essential input for the appraisal of transport investment projects. However, according to recent empirical evidence, long-term predictions are subject to high levels of uncertainty. This paper quantifies uncertainty in traffic forecasts for the tolled motorway network in Spain. Uncertainty is quantified in the form of a confidence interval for the traffic forecast that includes both model uncertainty and input uncertainty. We apply a stochastic simulation process based on bootstrapping techniques. Furthermore, the paper proposes a new methodology to account for capacity constraints in long-term traffic forecasts. Specifically, we suggest a dynamic model in which the speed of adjustment is related to the ratio between the actual traffic flow and the maximum capacity of the motorway. This methodology is applied to a specific public policy that consists of removing the toll on a certain motorway section before the concession expires.
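A minimal Python sketch of how bootstrapping can turn a fitted traffic model into a forecast confidence interval; the linear model, variable names and resampling scheme are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(42)

    def bootstrap_forecast_ci(X, y, x_future, n_boot=2000, alpha=0.05):
        """Resample observations, refit the traffic model and collect forecasts."""
        n = len(y)
        forecasts = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)                        # resample with replacement
            beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            resid = y[idx] - X[idx] @ beta
            # model uncertainty (refitted beta) plus input/residual uncertainty
            forecasts[b] = x_future @ beta + rng.choice(resid)
        lo, hi = np.quantile(forecasts, [alpha / 2, 1 - alpha / 2])
        return lo, hi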
Abstract:
Given the high cost of running a large ATM network at full strength in order to apply our ideas on network management, i.e., dynamic virtual path (VP) management and fault restoration, we developed a distributed simulation platform on which to perform our experiments. This platform also had to be capable of supporting other sorts of tests, such as connection admission control (CAC) algorithms, routing algorithms, and accounting and charging methods. The platform was conceived as a very simple, event-oriented and scalable simulation. The main goal was the simulation of a working ATM backbone network with a potentially large number of nodes (hundreds). As research into control algorithms and low-level, or rather cell-level, methods was beyond the scope of this study, the simulation took place at the connection level, i.e., there was no real traffic of cells. The simulated network behaved like a real network, accepting and rejecting connections requested either by standard management tools, such as SNMP-based ones, or by experimental tools using the node API.
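A minimal Python sketch of what a connection-level, event-oriented simulation of this kind looks like; the arrival and holding-time distributions, the crude admission rule and all names are hypothetical illustrations, not the platform's actual design.

    import heapq, random

    def simulate_connections(arrival_rate, mean_hold, capacity, horizon):
        """Connection-level event simulation: no cell traffic, only setup/teardown events."""
        events, in_use, accepted, blocked = [], 0, 0, 0
        heapq.heappush(events, (random.expovariate(arrival_rate), "setup"))
        while events:
            t, kind = heapq.heappop(events)
            if t > horizon:
                break
            if kind == "setup":
                heapq.heappush(events, (t + random.expovariate(arrival_rate), "setup"))
                if in_use < capacity:          # crude CAC: accept while capacity remains
                    in_use += 1
                    accepted += 1
                    heapq.heappush(events, (t + random.expovariate(1.0 / mean_hold), "teardown"))
                else:
                    blocked += 1
            else:
                in_use -= 1
        return accepted, blocked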
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
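A minimal Python sketch of the isometric logratio transformation for one common (pivot-coordinate) basis choice; the basis and the function name are illustrative, since the abstract does not specify which orthonormal basis is used.

    import numpy as np

    def ilr(x):
        """Isometric logratio transform of a composition x (parts summing to a constant),
        using a standard sequential-binary-partition basis: maps the D-part simplex to
        R^(D-1), where ordinary multivariate statistics are valid."""
        x = np.asarray(x, dtype=float)
        D = x.shape[-1]
        z = np.empty(x.shape[:-1] + (D - 1,))
        for i in range(1, D):
            gm = np.exp(np.log(x[..., :i]).mean(axis=-1))   # geometric mean of first i parts
            z[..., i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[..., i])
        return z

    print(ilr([0.7, 0.2, 0.1]))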
Abstract:
The objective of this work is to develop a methodology for the parametric analysis of the compression test of a composite panel stiffened with three stiffeners. First, it is necessary to develop an automated system to generate and evaluate the set of parameterisations. Next, we study which state variables are the most suitable for representing local buckling, global bending, the critical instability load and the failure index in the parametric analysis. Finite element modelling is used to simulate the compression test of the panel. The simulation is carried out by means of a nonlinear analysis, in order to study the instability and the nonlinear phenomena undergone by the panel. The study is complemented with a modal analysis and a linear analysis.
Abstract:
Study of the aerodynamic efficiency of heavy-vehicle bodywork with a view to reducing fuel consumption in long-distance coaches. The study is based on three aspects: validation of the simulation software, an aerodynamic study of different coach bodies available on the market, and an aerodynamic study of different add-on components.
Abstract:
The objective of the project is to implement a recommender system simulator that makes it possible to study dissociation algorithms between recommender agent and user, combining them with various recommendation techniques, using infohabitants as Recommender Agents, and observing how they work within a recommender system.
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Specifically, taking advantage of the distance properties of the domain, we exploit the capability of clustering techniques to partition the data space in order to convert an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and non-clustered data, in order to work out how profitable the clustering is and which of the presented methods is the most suitable.
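A generic simulated annealing loop of the kind applied to each (clustered or non-clustered) LA instance; `cost` and `neighbour` are placeholders for the location-allocation objective and move operator, not the authors' implementation.

    import math, random

    def simulated_annealing(initial, cost, neighbour, t0=1.0, cooling=0.995, n_iter=20000):
        """Generic simulated annealing: accept worse solutions with a probability that
        shrinks as the temperature decreases, to escape local minima of the cost."""
        current, best = initial, initial
        c_cur = c_best = cost(initial)
        t = t0
        for _ in range(n_iter):
            cand = neighbour(current)
            c_cand = cost(cand)
            if c_cand < c_cur or random.random() < math.exp((c_cur - c_cand) / t):
                current, c_cur = cand, c_cand
                if c_cur < c_best:
                    best, c_best = current, c_cur
            t *= cooling
        return best, c_best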