941 results for Simulation-models
Abstract:
A Monte Carlo simulation method is used to study the effects of adsorption strength and topology of sites on the adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length. Argon is used as a model adsorbate, while the adsorbent is modeled as a finite carbon slit pore whose two walls are composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. Impurities having a well depth of interaction greater than that of a carbon atom are assumed to be grafted onto the surface. Different topologies of the impurities (corner, centre, shelf and random) are studied. Adsorption isotherms of argon at 87.3 K are obtained for pores having widths of 1, 1.5 and 3 nm using Grand Canonical Monte Carlo (GCMC) simulation. These results are compared with isotherms obtained for infinite pores. It is shown that surface heterogeneity significantly affects the overall adsorption isotherm, particularly the phase transition. Essentially, it shifts the onset of adsorption to lower pressure, and the adsorption isotherms for these four impurity models are generally greater than that for the finite pore without impurities. The positions of impurities on the solid surface also affect the shape of the adsorption isotherm and the phase transition. We have found that impurities placed at the centre of the pore walls give the greatest adsorption at low pressures. However, as the pressure increases, impurities placed along the edges of the graphene layers show the most significant effect on the adsorption isotherm. We have also investigated the effect of surface heterogeneity on the adsorption hysteresis loops of three impurity-topology models: the adsorption branches of these isotherms differ, while the desorption branches lie quite close to each other. This suggests that the desorption branch is either the thermodynamic equilibrium branch or closer to it than the adsorption branch. (c) 2005 Elsevier Inc. All rights reserved.
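The GCMC method referred to above accepts trial particle insertions and deletions with the standard grand-canonical Metropolis probabilities. A minimal sketch of those two acceptance rules (illustrative only, not the authors' code; `lambda3` denotes the cubed thermal de Broglie wavelength, and all names are hypothetical):

```python
import math

def gcmc_insert_accept(dU, n, volume, mu, beta, lambda3):
    """Metropolis acceptance probability for inserting one particle in the
    grand-canonical (mu, V, T) ensemble; dU is the trial energy change."""
    return min(1.0, volume / (lambda3 * (n + 1)) * math.exp(beta * (mu - dU)))

def gcmc_delete_accept(dU, n, volume, mu, beta, lambda3):
    """Acceptance probability for deleting one of n particles; dU is the
    energy change of the configuration upon removal."""
    return min(1.0, lambda3 * n / volume * math.exp(-beta * (mu + dU)))
```

Repeating many such trial moves at a fixed chemical potential yields the average loading at one pressure, i.e. one point of the adsorption isotherm.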
Abstract:
Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air-core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. It is possible that the large eddy simulation (LES) turbulence model combined with the Mixture multi-phase model can predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared to that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further, we investigated the accurate prediction of axial segregation of magnetite using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. The addition of lift forces and the viscosity correction improved the predictions, especially near the wall. Predicted density profiles are very close to the gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite has been studied in detail. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 μm) are distributed uniformly throughout the cyclone. As the size of magnetite increases, more segregation of magnetite occurs close to the wall. The cut size (d50) of the magnetite segregation is 32 μm, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] correlations and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
We show how to efficiently simulate a quantum many-body system with tree structure when its entanglement (Schmidt number) is small for any bipartite split along an edge of the tree. As an application, we show that any one-way quantum computation on a tree graph can be efficiently simulated with a classical computer.
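The simulability criterion in this abstract hinges on the Schmidt number across each bipartite split. A small illustrative sketch (assuming NumPy; function names are hypothetical) of how the Schmidt coefficients of a pure bipartite state are obtained from the singular values of the reshaped amplitude vector:

```python
import numpy as np

def schmidt_coefficients(psi, dim_a, dim_b):
    """Schmidt coefficients of a pure bipartite state: reshape the amplitude
    vector into a dim_a x dim_b matrix and take its singular values."""
    mat = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    return np.linalg.svd(mat, compute_uv=False)

def schmidt_number(psi, dim_a, dim_b, tol=1e-12):
    """Number of non-negligible Schmidt coefficients; in the setting above,
    simulation cost grows with this rank across every edge of the tree."""
    return int(np.sum(schmidt_coefficients(psi, dim_a, dim_b) > tol))
```

For a product state the Schmidt number is 1, while a maximally entangled two-qubit state has Schmidt number 2.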
Abstract:
Fuzzy data has grown to be an important factor in data mining. Whenever uncertainty exists, simulation can be used as a model. Simulation is very flexible, although it can involve significant levels of computation. This article discusses fuzzy decision-making using the grey related analysis method. Fuzzy models are expected to better reflect decision-making uncertainty, at some cost in accuracy relative to crisp models. Monte Carlo simulation is used to incorporate experimental levels of uncertainty into the data and to measure the impact of fuzzy decision tree models using categorical data. Results are compared with decision tree models based on crisp continuous data.
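One simple way to realize the kind of Monte Carlo experiment described, sketched under the assumption that fuzziness is represented by uniform noise of a chosen half-width around each crisp value (all names are hypothetical, not the article's method):

```python
import random

def fuzzify(value, spread, rng):
    """One Monte Carlo realization of a fuzzy datum, modeled here as
    uniform noise of half-width `spread` around the crisp value."""
    return value + rng.uniform(-spread, spread)

def flip_rate(data, spread, n_trials, classify, seed=0):
    """Fraction of trials in which a crisp classifier changes its decision
    once experimental uncertainty is injected into the inputs."""
    rng = random.Random(seed)
    baseline = classify(data)
    flips = 0
    for _ in range(n_trials):
        perturbed = [fuzzify(x, spread, rng) for x in data]
        if classify(perturbed) != baseline:
            flips += 1
    return flips / n_trials
```

A decision well inside its region is insensitive to the noise (flip rate near zero), while a decision near a boundary flips often, which is the kind of impact measure the abstract refers to.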
Abstract:
Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
Abstract:
An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to the understanding of the temporal behaviour of faults and of single slip events. Numerical simulation of fault slip can provide insights into fault processes by allowing exploration of the parameter spaces that influence the microscopic and macroscopic physics of those processes. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in the power of computers, and the ability to exploit parallel computer systems, have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law without any rate dependency, and with no spatial heterogeneity in the intrinsic coefficient of friction, is used in the model. To simulate earthquake dynamics the model is sheared in a direction parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between events. In some of the larger events highly complex slip patterns are observed.
Abstract:
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
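A minimal sketch of a stochastic simulation algorithm with a fixed delay, in the spirit of the abstract (a simplified one-species birth-death model, not the authors' implementation; synthesis initiates at rate `k` but the product appears `tau` time units later, mimicking transcription/translation delay):

```python
import heapq
import random

def delayed_ssa(k, d, tau, t_end, seed=0):
    """Stochastic simulation with a fixed delay: synthesis initiates at rate k,
    but the product molecule appears only tau time units later; degradation
    (propensity d*n) takes effect immediately. Returns the count at t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    pending = []  # min-heap of completion times for in-flight syntheses
    while t < t_end:
        a0 = k + d * n
        dt = rng.expovariate(a0) if a0 > 0 else float("inf")
        # apply any scheduled completion that falls before the next reaction;
        # the exponential waiting time is then redrawn (memorylessness)
        if pending and pending[0] <= min(t + dt, t_end):
            t = heapq.heappop(pending)
            n += 1
            continue
        if t + dt >= t_end:
            break
        t += dt
        if rng.random() * a0 < k:
            heapq.heappush(pending, t + tau)  # delayed synthesis initiates
        else:
            n -= 1  # immediate degradation (only reachable when n > 0)
    return n
```

With a feedback term added to the synthesis propensity, this delayed scheme is what can sustain the oscillations that non-delayed deterministic models fail to reproduce.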
Abstract:
Background: The structure of proteins may change as a result of the inherent flexibility of some protein regions. We develop and explore probabilistic machine learning methods for predicting a continuum secondary structure, i.e. assigning probabilities to the conformational states of a residue. We train our methods using data derived from high-quality NMR models. Results: Several probabilistic models not only successfully estimate the continuum secondary structure, but also provide a categorical output on par with models directly trained on categorical data. Importantly, models trained on the continuum secondary structure are also better than their categorical counterparts at identifying the conformational state for structurally ambivalent residues. Conclusion: Cascaded probabilistic neural networks trained on the continuum secondary structure exhibit better accuracy in structurally ambivalent regions of proteins, while sustaining an overall classification accuracy on par with standard, categorical prediction methods.
Abstract:
Brugada syndrome (BS) is a genetic disease identified by an abnormal electrocardiogram (ECG), mainly right bundle branch block and ST-elevation in the right precordial leads. BS can lead to an increased risk of sudden cardiac death. Experimental studies on human ventricular myocardium with BS have been limited due to difficulties in obtaining data. Thus, the use of computer simulation is an important alternative. Most previous BS simulations were based on animal heart cell models. However, due to species differences, the use of human heart cell models, especially a model with three-dimensional whole-heart anatomical structure, is needed. In this study, we developed a model of the human ventricular action potential (AP) based on refining the ten Tusscher et al (2004 Am. J. Physiol. Heart Circ. Physiol. 286 H1573-89) model to incorporate newly available experimental data on some major ionic currents of human ventricular myocytes. The modified channels include the L-type calcium current (I-CaL), fast sodium current (I-Na), transient outward potassium current (I-to), rapidly and slowly delayed rectifier potassium currents (I-Kr and I-Ks) and inward rectifier potassium current (I-K1). Transmural heterogeneity of APs for epicardial, endocardial and mid-myocardial (M) cells was simulated by varying the maximum conductances of I-Ks and I-to. The modified AP models were then used to simulate the effects of BS on the cellular AP and body surface potentials using a three-dimensional dynamic heart-torso model. Our main findings are as follows. (1) BS has little effect on the AP of endocardial or mid-myocardial cells, but has a large impact on the AP of epicardial cells. (2) A likely region of BS with abnormal cell AP is near the right ventricular outflow tract, and the resulting ST-segment elevation is located in the median precordium area. These simulation results are consistent with experimental findings reported in the literature. 
The model can reproduce a variety of electrophysiological behaviors and provides a good basis for understanding the genesis of abnormal ECG under the condition of BS disease.
Abstract:
Bistability arises within a wide range of biological systems, from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions has indicated that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability under various system conditions, these models cannot capture cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for building quantitative stochastic models of large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully reproduced experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
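The construction described, replacing each deterministic increment of the ODE model with a Poisson random variable of the same mean, can be sketched as follows (illustrative names only; a Knuth-style sampler is used to keep the example dependency-free):

```python
import math
import random

def poisson(mean, rng):
    """Knuth's method for a Poisson variate (adequate for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_step(state, propensities, stoichiometry, dt, rng):
    """One update of the construction described above: each deterministic
    increment a_j(x)*dt of the ODE model is replaced by a Poisson variate
    with that mean, so the rate equations become a stochastic process."""
    new = list(state)
    for a_j, v_j in zip(propensities, stoichiometry):
        n_events = poisson(max(a_j(state) * dt, 0.0), rng)
        for i, change in enumerate(v_j):
            new[i] += n_events * change
    return [max(x, 0) for x in new]  # molecule counts stay non-negative
```

Averaging many such trajectories recovers the ODE solution, while individual trajectories exhibit the cell-to-cell fluctuations the deterministic model lacks.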
Abstract:
Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete-event and discrete-time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered efforts to obtain meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models alongside traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
Abstract:
The recurrence interval statistics for regional seismicity follow a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution with power-law scaling for recurrence intervals shorter than the average rate of activity and exponential decay for larger intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of the conservation parameter of that model. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
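The universal distribution referred to has the modified gamma form f(θ) = C θ^(γ−1) exp(−θ/B) for recurrence intervals θ rescaled by the mean rate. A small sketch (hypothetical names) of the density and the rescaling step used in such an analysis:

```python
import math

def corral_pdf(theta, c, gamma, b):
    """Modified gamma density f(theta) = c * theta**(gamma - 1) * exp(-theta / b)
    for recurrence intervals rescaled by the mean rate (the Corral, 2004 form)."""
    return c * theta ** (gamma - 1.0) * math.exp(-theta / b)

def rescale(intervals):
    """Rescale raw recurrence intervals by their mean so catalogs with
    different activity rates collapse onto the same curve."""
    mean = sum(intervals) / len(intervals)
    return [t / mean for t in intervals]
```

With γ = 1 the density reduces to a pure exponential, the Poisson-process case that most of the automata in the study exhibit; γ < 1 gives the power-law excess of short intervals seen in regional seismicity.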
Abstract:
Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains, how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power law time-to-failure functions to the cumulative energy release are obtained.
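Accelerating energy release of the kind mentioned is commonly fitted with a power-law time-to-failure function of the form E(t) = A + B(tf − t)^m, where B < 0 and 0 < m < 1 give acceleration toward the failure time tf. A sketch, assuming NumPy and holding tf and m fixed so that A and B can be fitted linearly (names hypothetical):

```python
import numpy as np

def time_to_failure(t, a, b, tf, m):
    """Accelerating-moment-release form E(t) = a + b*(tf - t)**m;
    with b < 0 and 0 < m < 1 the cumulative energy accelerates toward tf."""
    return a + b * (tf - t) ** m

def fit_ab(t, energy, tf, m):
    """With tf and m fixed, a and b enter linearly and can be fitted
    by ordinary least squares."""
    design = np.column_stack([np.ones_like(t), (tf - t) ** m])
    (a, b), *_ = np.linalg.lstsq(design, energy, rcond=None)
    return a, b
```

A full fit would also scan tf and m (e.g. on a grid) and keep the pair minimizing the residual; the linear sub-problem above is solved at each grid point.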
Abstract:
Turbulent flow around a rotating circular cylinder has numerous applications, including wall shear stress and mass-transfer measurement related to corrosion studies. It is also of interest in the context of flow over convex surfaces, where standard turbulence models perform poorly. The main purpose of this paper is to elucidate the basic turbulence mechanism around a rotating cylinder at low Reynolds numbers to provide a better understanding of flow fundamentals. Direct numerical simulation (DNS) has been performed in a reference frame rotating at constant angular velocity with the cylinder. The governing equations are discretized by using a finite-volume method. As for fully developed channel, pipe, and boundary layer flows, a laminar sublayer, buffer layer, and logarithmic outer region were observed. The level of mean velocity is lower in the buffer and outer regions, but the logarithmic region still has a slope equal to the inverse of the von Karman constant. Instantaneous flow visualization revealed that the turbulence length scale typically decreases as the Reynolds number increases. Wavelet analysis provided some insight into the dependence of structural characteristics on wave number. The budget of the turbulent kinetic energy was computed and found to be similar to that in plane channel flow as well as in pipe and zero-pressure-gradient boundary layer flows. Coriolis effects appear as an equivalent production term for the azimuthal and radial velocity fluctuations, lowering their ratio relative to similar non-rotating boundary layer flows.
Abstract:
The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action and Perform Action. Once a cycle is complete, the operator will return to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified using domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task. Its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside of the calibration data. 
A simple strategy is used to calibrate the timing component of the model, and the results for reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
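The logistic-regression calibration step described can be sketched as follows, fitting P(conflict | x) = sigmoid(w·x + b) by plain gradient descent on a one-feature example (illustrative only; the actual OCM calibration uses the experimental data and its own feature set):

```python
import math

def sigmoid(z):
    """Logistic link: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Gradient-descent fit of P(y=1 | x) = sigmoid(w*x + b) for a single
    scalar feature (e.g. a conflict-geometry measure)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error drives both gradients
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b
```

Once fitted, the model extrapolates smoothly to scenarios outside the calibration set, which is exactly the interpolation behaviour the calibration aims for.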