921 results for Computational Simulation
Abstract:
Computational models in physiology often integrate functional and structural information across a large range of spatio-temporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and how they can reduce, replace and refine animal experiments. A fundamental requirement for fulfilling these expectations and achieving the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims to inform strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole-ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations and experiments are intertwined in an assemblage that is itself a system, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing strategies for validation are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a prerequisite to conducting physiological investigations with the MSE system; 3) definition and adoption of standards facilitates interoperability of experiments, models and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and their combination.
Abstract:
Increased focus on energy cost savings and carbon footprint reduction has improved the visibility of building energy simulation, which has become a mandatory requirement of several building rating systems. Despite developments in building energy simulation algorithms and user interfaces, some major challenges remain; an important one is computational demand and processing time. In this paper, we analyze the opportunities and challenges associated with this topic while executing a set of 275 parametric energy models simultaneously in EnergyPlus using a High Performance Computing (HPC) cluster. Successful parallel computing implementation of building energy simulations will not only reduce the time needed to obtain results and enable scenario development for different design considerations, but might also enable dynamic Building Information Modeling (BIM) integration and near real-time decision-making. The paper concludes with a discussion of future directions and opportunities associated with building energy modeling simulations.
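A minimal sketch of how such a parametric batch could be fanned out with Python's multiprocessing, assuming the standard `energyplus` command-line interface (`-w` selects the weather file, `-d` the output directory); the model and weather file names are hypothetical placeholders, not the paper's actual setup:

```python
from multiprocessing import Pool
from pathlib import Path
import subprocess

def build_command(idf_path, weather_path, out_dir):
    # Standard EnergyPlus CLI invocation: -w weather file, -d output directory.
    return ["energyplus", "-w", str(weather_path), "-d", str(out_dir), str(idf_path)]

def run_model(idf_path, weather_path="weather.epw"):
    # Each worker launches one EnergyPlus process for one parametric variant.
    out_dir = Path("results") / Path(idf_path).stem
    return subprocess.run(build_command(idf_path, weather_path, out_dir),
                          capture_output=True).returncode

models = [f"model_{i:03d}.idf" for i in range(275)]  # the 275 parametric variants
# On an HPC node, fan the runs out across local cores:
#   with Pool(processes=8) as pool:
#       return_codes = pool.map(run_model, models)
```

Across multiple nodes, the same worker function would typically be dispatched by the cluster's job scheduler rather than a local process pool.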
Abstract:
Nonlinear time-fractional diffusion equations have been used to describe liquid infiltration, for both subdiffusion and superdiffusion, in porous media. In this paper, some problems of anomalous infiltration with a variable-order time-fractional derivative in porous media are considered. The time-fractional Boussinesq equation is also considered. Two computationally efficient implicit numerical schemes for the diffusion and wave-diffusion equations are proposed. Numerical examples are provided to show that the numerical methods are computationally efficient.
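As one concrete instance of an implicit scheme of this kind, the classical L1 discretization of a constant-order Caputo derivative for the subdiffusion equation D_t^alpha u = u_xx with homogeneous Dirichlet boundaries can be sketched as follows; this is an illustrative standard scheme, not the paper's variable-order methods, and all parameters are arbitrary:

```python
import numpy as np
from math import gamma

def l1_subdiffusion(alpha, L=1.0, T=0.1, nx=21, nt=50):
    # Implicit L1 finite-difference scheme for D_t^alpha u = u_xx
    # (Caputo derivative, 0 < alpha < 1), u = 0 on both boundaries.
    dx, dt = L / (nx - 1), T / nt
    mu = gamma(2 - alpha) * dt**alpha / dx**2
    x = np.linspace(0, L, nx)
    u = np.sin(np.pi * x)                      # initial condition
    hist = [u.copy()]
    # L1 weights b_k = (k+1)^{1-alpha} - k^{1-alpha}
    b = [(k + 1)**(1 - alpha) - k**(1 - alpha) for k in range(nt)]
    # Tridiagonal system (I - mu * discrete Laplacian) u^n = rhs
    A = np.eye(nx)
    for i in range(1, nx - 1):
        A[i, i - 1] = A[i, i + 1] = -mu
        A[i, i] = 1 + 2 * mu
    for n in range(1, nt + 1):
        rhs = hist[-1].copy()
        # Memory (history) term of the Caputo derivative
        for k in range(1, n):
            rhs -= b[k] * (hist[n - k] - hist[n - k - 1])
        rhs[0] = rhs[-1] = 0.0                 # Dirichlet boundary values
        u = np.linalg.solve(A, rhs)
        hist.append(u)
    return x, u
```

The full history sum is what makes time-fractional schemes expensive, which is why computational efficiency is a central concern in this setting.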
Abstract:
In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods, which extend the tau-leaping framework to make use of past information. Using the theta-trapezoidal tau-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k >= 3 is weak order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and successfully tested numerically for both linear and non-linear systems. A series of additional examples has been implemented to demonstrate the efficacy of this approach.
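The multistep SAB/SAM methods build on the elementary tau-leap update. A minimal (Euler) tau-leap for a linear birth-death process is sketched below as an illustration of that base update only, not the paper's higher-order schemes; rates and step size are arbitrary:

```python
import numpy as np

def tau_leap(x0, b, d, tau, steps, rng):
    # One path of the basic Euler tau-leap for a linear birth-death
    # process: birth reaction X -> X+1 at rate b*x, death X -> X-1 at
    # rate d*x. Reaction firings over [t, t+tau) are Poisson samples.
    x, path = x0, [x0]
    for _ in range(steps):
        births = rng.poisson(b * x * tau)
        deaths = rng.poisson(d * x * tau)
        x = max(x + births - deaths, 0)   # clamp: plain tau-leap can overshoot below zero
        path.append(x)
    return path

rng = np.random.default_rng(0)
path = tau_leap(x0=100, b=0.1, d=0.12, tau=0.1, steps=100, rng=rng)
```

The multistep idea is to reuse propensity values from earlier steps (the "past information") in the update, analogously to deterministic Adams methods.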
Abstract:
Nondeclarative memory and novelty processing in the brain is an actively studied field of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend, namely rising activity for unfamiliar stimuli, question the generality of repetition suppression and stir debate over the underlying neural mechanisms. This letter introduces a theory and computational model that extend existing theories and suggest that both trends are, in principle, the rising and falling parts of an inverted U-shaped dependence of activity on stimulus novelty that may naturally emerge in a neural network with Hebbian learning and lateral inhibition. We further demonstrate that the proposed model is sufficient to simulate dissociable forms of repetition priming using real-world stimuli. The simulation results also suggest that the novelty of stimuli used in neuroscientific research must be assessed with particular caution. The potential importance of the inverted U in stimulus processing and its relationship to the acquisition of knowledge and competencies in humans is also discussed.
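The two ingredients the model combines, Hebbian weight growth and lateral inhibition, can be sketched in a toy update rule. This is illustrative only: reproducing the inverted-U itself requires the full model described in the letter, and the learning rate and inhibition strength here are arbitrary:

```python
import numpy as np

def hebbian_step(W, x, eta=0.1, inhibition=0.5):
    # One update of a layer of rate units: feedforward drive, subtractive
    # lateral inhibition (each unit suppressed by the others' activity),
    # then a Hebbian weight increment for co-active input/output pairs.
    y = W @ x
    y = np.maximum(y - inhibition * (y.sum() - y), 0.0)  # lateral inhibition
    W = W + eta * np.outer(y, x)                          # Hebbian learning
    # Keep weight rows bounded so repeated presentations cannot diverge
    W = W / np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1.0)
    return W, y
```

Repeated presentation of a stimulus strengthens the weights onto the units it drives, while inhibition shapes how that strengthening translates into overall population activity.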
Abstract:
The fate of two popular antibiotics, oxytetracycline and oxolinic acid, in a fish pond was simulated using a computational model. The VDC model, originally designed to predict pesticide fate and transport in paddy fields, was modified to account for the differences between the pond and the paddies as well as between fish and rice plant behaviors. The pond conditions were set following typical practice in South East Asian aquaculture. The two antibiotics were administered to the animals in the pond through medicated feed over a period of 5 days, as in actual practice. Concentrations of oxytetracycline in pond water were higher than those of oxolinic acid at the beginning of the simulation; the dissipation rate of oxytetracycline was also higher, as it is more readily available for degradation in the water. Over the long term, oxolinic acid was present at higher concentrations than oxytetracycline in both pond water and pond sediment. The simulated results are expected to be conservative and can be useful for lower-tier assessment of the exposure risk of veterinary medicines in the aquaculture industry, but more data are needed for complete validation of the model.
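The overall structure of such a fate simulation, dosing into water, first-order losses, and slower accumulation and degradation in sediment, can be sketched with a highly simplified two-compartment model. This is not the VDC model; all rate constants (per day) and the dosing schedule are illustrative:

```python
def pond_fate(dose_days=5, days=60, dt=0.1,
              k_deg_w=0.3, k_settle=0.05, k_deg_s=0.01, dose_rate=1.0):
    # Minimal water/sediment first-order fate sketch, integrated with
    # forward Euler. Dosing mimics a 5-day medicated-feed period.
    cw, cs, t = 0.0, 0.0, 0.0
    series = []
    for _ in range(int(days / dt)):
        inflow = dose_rate if t < dose_days else 0.0   # dosing period only
        dcw = inflow - (k_deg_w + k_settle) * cw       # degradation + settling losses
        dcs = k_settle * cw - k_deg_s * cs             # slow build-up and decay in sediment
        cw, cs, t = cw + dt * dcw, cs + dt * dcs, t + dt
        series.append((t, cw, cs))
    return series
```

With a fast-degrading compound the water peak is high but short-lived, while the slowly degrading sediment pool persists, which is the qualitative contrast reported for the two antibiotics.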
Abstract:
RNase S is a complex consisting of two proteolytic fragments of RNase A: the S peptide (residues 1-20) and the S protein (residues 21-124). RNase S and RNase A have very similar X-ray structures and enzymatic activities. Previous experiments have shown increased rates of hydrogen exchange and greater sensitivity to tryptic cleavage for RNase S relative to RNase A. It has therefore been asserted that the RNase S complex is considerably more dynamically flexible than RNase A. In the present study we examine the differences in the dynamics of RNase S and RNase A computationally, by MD simulations, and experimentally, using trypsin cleavage as a probe of dynamics. The fluctuations around the average solution structure during the simulation were analyzed by measuring the RMS deviation in coordinates. No significant differences between RNase S and RNase A dynamics were observed in the simulations. We were able to account for the apparent discrepancy between simulation and experiment by a simple model. According to this model, the experimentally observed differences in dynamics can be quantitatively explained by the small amounts of free S peptide and S protein that are present in equilibrium with the RNase S complex. Thus, folded RNase A and the RNase S complex have identical dynamic behavior, despite the break in the polypeptide chain between residues 20 and 21 in the latter molecule. This is in contrast to what has been widely believed for over 30 years about this important fragment complementation system.
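The fluctuation measure used here, the per-atom RMS deviation about the average structure, can be sketched as follows; trajectory frames are assumed to be already superposed on a common reference:

```python
import numpy as np

def rms_fluctuation(frames):
    # Per-atom RMS fluctuation about the trajectory-average structure.
    # `frames` has shape (n_frames, n_atoms, 3), already least-squares
    # fitted to a reference so rigid-body motion is removed.
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)                          # average solution structure
    dev = frames - mean                                 # per-frame fluctuation vectors
    return np.sqrt((dev**2).sum(axis=2).mean(axis=0))   # RMS over frames, per atom
```

Comparing these per-atom profiles between the RNase A and RNase S trajectories is what supports the conclusion that the two systems show no significant dynamical differences.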
Abstract:
We propose a dynamic mathematical model of tissue oxygen transport by a preexisting three-dimensional microvascular network which provides nutrients for an in situ cancer at the very early stage of primary microtumour growth. The expanding tumour consumes oxygen as it invades the surrounding tissues and co-opts host vessels. Preexisting vessel cooption, remodelling and collapse are modelled through the changes in haemodynamic conditions caused by the growing tumour. A detailed computational model of oxygen transport in tumour tissue is developed by considering (a) the time-varying oxygen advection-diffusion equation within the microvessel segments, (b) the oxygen flux across the vessel walls, and (c) the oxygen diffusion and consumption within the tumour and surrounding healthy tissue. The results show the oxygen concentration distribution at different time points of early tumour growth. In addition, the influence of preexisting vessel density on oxygen transport is discussed. The proposed model not only provides a quantitative approach for investigating the interactions between tumour growth and oxygen delivery, but is also extendable to model the transport of other molecules or chemotherapeutic drugs in future studies.
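A much-reduced sketch of component (c), tissue-side diffusion with consumption, in nondimensional 1-D form: fixed oxygen concentration at the vessel wall, zero-order consumption in tissue, and no flux at the far boundary. All parameters are illustrative and this is not the paper's 3-D network model:

```python
import numpy as np

def oxygen_profile(M=1.0, u0=1.0, nx=26, dt=4e-4, steps=10000):
    # Explicit time-marching of u_t = u_xx - M on x in [0, 1] to steady
    # state: u(0) = u0 at the vessel wall, u_x(1) = 0 (no-flux boundary),
    # M is a constant (zero-order) oxygen consumption rate.
    dx = 1.0 / (nx - 1)
    u = np.full(nx, u0)
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u[1:-1] += dt * (lap[1:-1] - M)   # diffusion minus consumption
        u[0] = u0                         # vessel-wall concentration held fixed
        u[-1] = u[-2]                     # zero-flux outer boundary
    return u
```

The steady profile decays away from the vessel wall (analytically u(x) = u0 + M(x²/2 − x) for this setup), the 1-D analogue of the concentration gradients the full model resolves around each microvessel segment.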
Abstract:
A numerical scheme is presented for accurate simulation of fluid flow using the lattice Boltzmann equation (LBE) on unstructured meshes. A finite volume approach is adopted to discretize the LBE on a cell-centered, arbitrarily shaped, triangular tessellation. The formulation includes a formal, second-order discretization using a Total Variation Diminishing (TVD) scheme for the terms representing advection of the distribution function in physical space, due to microscopic particle motion. The advantage of the LBE approach is exploited by implementing the scheme in a new computer code that runs on a parallel computing system. Performance of the new formulation is systematically investigated by simulating four benchmark flows of increasing complexity, namely (1) flow in a plane channel, (2) unsteady Couette flow, (3) flow caused by a moving lid over a 2D square cavity and (4) flow over a circular cylinder. For each of these flows, the present scheme is validated against results from Navier-Stokes computations as well as lattice Boltzmann simulations on a regular mesh. It is shown that the scheme is robust and accurate for the different test problems studied.
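The TVD ingredient can be illustrated on a 1-D periodic mesh with a minmod-limited finite-volume update for linear advection. This is a sketch of the limiting idea only, not the unstructured cell-centered LBE formulation of the paper, and assumes a positive advection speed:

```python
import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at extrema, the smaller-magnitude slope otherwise.
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advect(u, a=1.0, cfl=0.4, steps=100):
    # Second-order slope-limited finite-volume scheme for u_t + a u_x = 0
    # (a > 0) on a periodic unit interval; minmod limiting keeps the
    # update Total Variation Diminishing.
    dx = 1.0 / u.size
    dt = cfl * dx / a
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope per cell
        f = a * (u + 0.5 * (1 - a * dt / dx) * s)          # upwind face flux at i+1/2
        u = u - (dt / dx) * (f - np.roll(f, 1))            # conservative update
    return u
```

The TVD property means advecting a sharp front generates no new extrema, the behaviour needed when transporting the distribution function without spurious oscillations.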
Abstract:
Results are presented from applying multi-time-scale analysis, using the singular perturbation technique, to long-time simulation of power system problems. A linear system represented in state-space form can be decoupled into slow and fast subsystems; these subsystems can be simulated with different time steps and then recombined to obtain the system response. Simulation results with a two-time-scale analysis of a power system show a large saving in computational cost.
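The decoupling idea can be sketched for a linear system x' = Ax by splitting the eigenmodes by magnitude and advancing each group with its own step. This illustrates only the slow/fast split with a simple Euler integrator; the paper's singular perturbation construction is not reproduced here:

```python
import numpy as np

def euler_modes(lam, z0, T, dt):
    # Forward Euler on decoupled scalar modes z' = lam * z over [0, T]
    n = int(round(T / dt))
    return z0 * (1 + lam * dt) ** n

def two_time_scale(A, x0, T, dt_fast, dt_slow, split):
    # Diagonalize A, classify eigenvalues by magnitude into slow and fast
    # subsystems, simulate each with its own time step, then recombine.
    lam, V = np.linalg.eig(A)
    z0 = np.linalg.solve(V, x0.astype(complex))   # modal coordinates
    slow = np.abs(lam) < split
    z = np.empty_like(z0)
    z[slow] = euler_modes(lam[slow], z0[slow], T, dt_slow)     # large steps suffice
    z[~slow] = euler_modes(lam[~slow], z0[~slow], T, dt_fast)  # small steps for stability
    return (V @ z).real
```

The saving comes from taking only a few large steps for the slow subsystem instead of forcing the whole system onto the smallest stable step.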
Abstract:
Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals, and in a number of cases they have been closely linked with various inheritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variance between individuals; they are typically abundant, and their measurement is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNVs. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity, and were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidates for previously unknown inversions and deletions as well as correctly detecting known rearrangements of both kinds.
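The core of a backward-in-time coalescent simulation, stripped of the recombination, inversion and chiasma-interference machinery the thesis adds, can be sketched as follows; population size and seed are illustrative:

```python
import random

def coalescent_times(n, N=10000, seed=0):
    # Backward-in-time (Kingman) coalescent for a sample of n lineages in
    # a haploid population of size N: with k lineages present, the waiting
    # time to the next pairwise merger is exponential with rate k(k-1)/(2N).
    rng = random.Random(seed)
    t, times, k = 0.0, [], n
    while k > 1:
        t += rng.expovariate(k * (k - 1) / (2 * N))  # time to next coalescence
        times.append(t)
        k -= 1                                       # two lineages merge into one
    return times
```

A sample of n lineages always coalesces through exactly n − 1 merger events; simulators like the one in the thesis layer recombination and inversion breakpoints on top of this ancestral process.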