99 results for: Helicity method, subtraction method, numerical methods, random polarizations
Abstract:
Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology, most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, in which the probabilities and the numbers of each molecular species must all remain non-negative. While accurately serving this purpose, the SSA can be computationally inefficient because it requires very small time steps, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of the Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
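To make the leap idea concrete, here is a minimal sketch of the Poisson τ-leap method applied to a toy decay reaction X → ∅; the rate constant, step size, and initial population are illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_tau_leap(x0, c, tau, t_end):
    """Poisson tau-leap for the decay reaction X -> 0 with propensity c*x.

    Each step fires K ~ Poisson(c*x*tau) reactions at once instead of
    simulating them one at a time as the exact SSA would.
    """
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < t_end and x > 0:
        a = c * x                      # propensity of the decay channel
        k = rng.poisson(a * tau)       # number of firings in this leap
        x = max(x - k, 0)              # keep the population non-negative
        t += tau
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = poisson_tau_leap(x0=1000, c=0.5, tau=0.01, t_end=10.0)
print(xs[-1])
```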
Abstract:
In this paper, a new differential evolution (DE) based approach to power system optimal available transfer capability (ATC) assessment is presented. Power system total transfer capability (TTC) is traditionally computed by the repeated power flow (RPF) and continuation power flow (CPF) methods. These methods are based on the assumption that the outputs of the source-area generators are increased in identical proportion to balance the load increment in the sink area. A new DE-based approach that generates an optimal dispatch of both source-area generators and sink-area loads is proposed in this paper. The new method can compute the ATC between two areas with a significant improvement in accuracy over the traditional RPF- and CPF-based methods. A case study on a 30-bus system is given to verify the efficiency and effectiveness of the new DE-based ATC optimization approach.
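For readers unfamiliar with DE, the following is a minimal sketch of the classic DE/rand/1/bin scheme applied to a toy objective; the paper's actual ATC objective, constraints, and parameter settings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin minimizer; bounds is a (d, 2) array."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    X = lo + rng.random((pop, d)) * (hi - lo)         # initial population
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])      # binomial crossover
            ft = f(trial)
            if ft <= fx[i]:                            # greedy selection
                X[i], fx[i] = trial, ft
    best = int(fx.argmin())
    return X[best], fx[best]

# Toy use: a sphere function stands in for the ATC objective of the paper.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        np.array([[-5.0, 5.0]] * 4))
print(x_best, f_best)
```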
Abstract:
Computational Methods for Coupled Problems in Science and Engineering
Abstract:
Power system small signal stability analysis aims to explore different small signal stability conditions and controls, namely: (1) exploring the power system security domains and their boundaries in the space of the power system parameters of interest, including the load flow feasibility, saddle-node and Hopf bifurcation boundaries; (2) finding the maximum and minimum damping conditions; and (3) determining control actions that provide and increase small signal stability. These problems are presented in this paper as different modifications of a general minimization/maximization problem whose outcome depends on the initial guesses for the variables and on the numerical methods used. In the problems considered, all of the extreme points are of interest. Additionally, there are difficulties in finding the derivatives of the objective functions with respect to the parameters, and numerical computation of derivatives within traditional optimization procedures is time consuming. In this paper, we propose a new black-box genetic optimization technique for comprehensive small signal stability analysis, which can effectively cope with highly nonlinear objective functions that have multiple minima and maxima and derivatives that cannot be expressed analytically. The optimization results can then be used to provide important information such as optimal system control decisions and an assessment of the network's maximum transmission capacity. (C) 1998 Elsevier Science S.A. All rights reserved.
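As a flavour of the black-box genetic optimization advocated here, the sketch below implements a toy real-coded genetic algorithm on a derivative-free multimodal test function; the population size, operators, and test function are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def genetic_minimize(f, lo, hi, pop=30, gens=150, sigma=0.1, elite=2):
    """Toy real-coded GA for a black-box objective (no derivatives needed)."""
    d = len(lo)
    X = lo + rng.random((pop, d)) * (hi - lo)
    for _ in range(gens):
        fx = np.array([f(x) for x in X])
        X = X[fx.argsort()]                        # sort by fitness
        kids = [X[:elite].copy()]                  # elitism
        while sum(len(k) for k in kids) < pop:
            i, j = rng.integers(pop // 2, size=2)  # mate among the better half
            alpha = rng.random(d)
            child = alpha * X[i] + (1 - alpha) * X[j]       # blend crossover
            child += rng.normal(0.0, sigma, d) * (hi - lo)  # Gaussian mutation
            kids.append(np.clip(child, lo, hi)[None, :])
        X = np.vstack(kids)[:pop]
    fx = np.array([f(x) for x in X])
    return X[fx.argmin()], fx.min()

# A multimodal black-box objective stands in for, e.g., a damping measure.
x_best, f_best = genetic_minimize(
    lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10),
    lo=np.array([-5.12] * 3), hi=np.array([5.12] * 3))
print(x_best, f_best)
```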
Abstract:
The movement of chemicals through the soil to the groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO₃⁻, Cl⁻, PO₄³⁻) in the vadose zone of a clay-dominated agricultural soil. Highly significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advective-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale. (C) 2001 Elsevier Science Ltd. All rights reserved.
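For context, standard one-dimensional textbook statements of the two models compared above are given below; these are generic forms, not the exact equations of the paper.

```latex
% Advective-dispersion model with a first-order reaction term (ADR):
\frac{\partial C}{\partial t}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
  - v\,\frac{\partial C}{\partial x}
  - \lambda C
% Two-region model (TRM): mobile water (m) advects and disperses while
% immobile water (im) exchanges solute at a first-order rate alpha:
\theta_{m}\frac{\partial C_{m}}{\partial t}
  + \theta_{im}\frac{\partial C_{im}}{\partial t}
  = \theta_{m} D_{m}\,\frac{\partial^{2} C_{m}}{\partial x^{2}}
  - q\,\frac{\partial C_{m}}{\partial x},
\qquad
\theta_{im}\frac{\partial C_{im}}{\partial t}
  = \alpha\,(C_{m} - C_{im})
```

Here C is the solute concentration, D the dispersion coefficient, v the pore-water velocity, λ a first-order reaction rate, θ the volumetric water contents of the two regions, q the Darcy flux, and α the mobile-immobile mass transfer coefficient.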
Abstract:
It is not possible to make measurements of the phase of an optical mode using linear optics without introducing an extra phase uncertainty. This extra phase variance is quite large for heterodyne measurements; however, it is possible to reduce it to the theoretical limit of $\log\bar{n}/(4\bar{n}^{2})$ using adaptive measurements. These measurements are quite sensitive to experimental inaccuracies, especially time delays and inefficient detectors. Here it is shown that the minimum introduced phase variance when there is a time delay of $\tau$ is $\tau/(8\bar{n})$. This result is verified numerically, showing that the phase variance introduced approaches this limit for most of the adaptive schemes using the best final phase estimate. The main exception is the adaptive mark II scheme with simplified feedback, which is extremely sensitive to time delays. The extra phase variance due to time delays is also considered for the mark I case with simplified feedback, verifying the $\tau/2$ result obtained by Wiseman and Killip both numerically and by a more rigorous analytic technique.
Abstract:
We study the continuous problem $y'' = f(x, y, y')$, $x \in [0,1]$, $0 = G((y(0), y(1)), (y'(0), y'(1)))$, and its discrete approximation $(y_{k+1} - 2y_k + y_{k-1})/h^2 = f(t_k, y_k, v_k)$, $k = 1, \dots, n-1$, $0 = G((y_0, y_n), (v_1, v_n))$, where $f$ and $G = (g_0, g_1)$ are continuous and fully nonlinear, $h = 1/n$, $v_k = (y_k - y_{k-1})/h$ for $k = 1, \dots, n$, and $t_k = kh$ for $k = 0, \dots, n$. We assume there exist strict lower and strict upper solutions and impose additional conditions on $f$ and $G$ which are known to yield a priori bounds on, and to guarantee the existence of, solutions of the continuous problem. We show that the discrete approximation also has solutions which approximate solutions of the continuous problem and which, as the grid size goes to 0, converge to the solution of the continuous problem when it is unique. Homotopy methods can be used to compute the solution of the discrete approximation. Our results were motivated by those of Gaines.
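A minimal sketch of how the discrete approximation above can be solved numerically, with a generic root-finder standing in for the homotopy methods mentioned; the test problem y'' = −y and its boundary data are illustrative choices.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_discrete_bvp(f, G, n):
    """Solve the scheme described above:
    (y[k+1] - 2*y[k] + y[k-1]) / h**2 = f(t_k, y_k, v_k),  k = 1..n-1,
    0 = G((y_0, y_n), (v_1, v_n)),  with v_k = (y_k - y_{k-1}) / h.
    """
    h = 1.0 / n
    t = np.arange(n + 1) * h

    def residual(y):
        v = (y[1:] - y[:-1]) / h                        # v_1 .. v_n
        interior = (y[2:] - 2 * y[1:-1] + y[:-2]) / h**2 \
                   - f(t[1:-1], y[1:-1], v[:-1])
        bc = G((y[0], y[n]), (v[0], v[n - 1]))          # two boundary residuals
        return np.concatenate([bc, interior])

    return fsolve(residual, np.zeros(n + 1))

# Toy example: y'' = -y with y(0) = 0, y(1) = sin(1); exact solution sin(x).
y = solve_discrete_bvp(lambda t, y, v: -y,
                       lambda yb, vb: np.array([yb[0], yb[1] - np.sin(1.0)]),
                       n=50)
print(y[25], np.sin(0.5))   # should be close
```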
Abstract:
Stochastic differential equations (SDEs) arise from physical systems where the parameters describing the system can only be estimated or are subject to noise. Much work has been done recently on developing higher-order Runge-Kutta methods for solving SDEs numerically. Fixed stepsize implementations of numerical methods have limitations when, for example, the SDE being solved is stiff, as this forces the stepsize to be very small. This paper presents a completely general variable stepsize implementation of an embedded Runge-Kutta pair for solving SDEs numerically; in this implementation, there is no restriction on the value used for the stepsize, and it is demonstrated that the integration remains on the correct Brownian path.
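The following is a minimal Euler-Maruyama sketch, not the embedded Runge-Kutta pair of the paper, illustrating the bookkeeping the abstract alludes to: solutions computed with different stepsizes are only comparable if they reuse the same Brownian path, here obtained by coarsening one fine set of Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(a, b, y0, T, n, dW):
    """Euler-Maruyama for dY = a(Y) dt + b(Y) dW on a *given* Brownian path."""
    h = T / n
    y = y0
    for k in range(n):
        y = y + a(y) * h + b(y) * dW[k]
    return y

# One Brownian path on a fine grid; coarser solves must reuse it so results at
# different stepsizes stay on the same path -- the same requirement a
# variable-stepsize scheme faces when it rejects and re-takes a step.
T, n_fine = 1.0, 1024
dW_fine = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)

a = lambda y: 1.5 * y          # drift of geometric Brownian motion
b = lambda y: 0.5 * y          # diffusion

for n in (256, 512, 1024):
    dW = dW_fine.reshape(n, -1).sum(axis=1)   # coarsen the same path
    print(n, euler_maruyama(a, b, 1.0, T, n, dW))
```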
Abstract:
Results are presented of a benchmark test comparing numerical schemes for a shock wave of $M_s = 2.38$ in nitrogen and argon interacting with a cone of 43° semi-apex angle, together with the corresponding experiments. The benchmark test was announced in Shock Waves Vol. 12, No. 4, in which we sought to clarify the effects of viscosity and heat conductivity on shock reflection in conical flows. This paper summarizes the results of ten numerical and two experimental contributions. The state of the art in studies of the shock/cone interaction is clarified.
Abstract:
This paper investigates the performance analysis of the separation of mutually independent sources in nonlinear models. Nonlinear mappings in which an unsupervised linear mixture is followed by an unknown, invertible nonlinear distortion are found in many signal processing applications. In general, blind separation of sources from their nonlinear mixtures is rather difficult. We propose using a kernel density estimator, incorporated with an equivariant gradient analysis, to separate sources subject to nonlinear distortion. The parameters of the kernel density estimator are iteratively updated to minimize the output dependence, expressed as a mutual information criterion. The equivariant gradient algorithm takes the form of a nonlinear decorrelation, which permits a convergence analysis. Experiments are presented to illustrate these results.
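As a small illustration of the density-estimation component only, here is a plain Gaussian kernel density estimator in NumPy; the equivariant gradient update and the mutual information criterion themselves are not reproduced, and the Laplacian source and bandwidth are illustrative choices.

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth):
    """Plain Gaussian kernel density estimate of p(x) from 1-D samples."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
s = rng.laplace(size=2000)              # a super-Gaussian source, say
grid = np.linspace(-5, 5, 201)
p_hat = gaussian_kde(s, grid, bandwidth=0.3)
# p_hat can feed a score-function estimate psi = -p'/p, the quantity a
# mutual-information-minimizing separation update typically needs.
print(p_hat.sum() * (grid[1] - grid[0]))   # should be close to 1
```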
Abstract:
The development of models in the Earth sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. a Newton-Raphson scheme or a Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation that is best in the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples to illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
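For flavour, the canonical escript usage pattern looks roughly like the sketch below, adapted from escript's documented introductory Poisson example; module paths and signatures vary between esys-escript releases, so treat the details as assumptions rather than a definitive API reference.

```python
# Sketch of the escript idiom described above: a PDE class object is set up
# from coefficients and solved by the linked Finley finite element library.
# Names follow older esys-escript releases and may have changed.
from esys.escript import whereZero
from esys.escript.linearPDEs import Poisson
from esys.finley import Rectangle

mydomain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=20)  # unit-square FE mesh
x = mydomain.getX()
gammaD = whereZero(x[0]) + whereZero(x[1])          # Dirichlet mask: u = 0 on
                                                    # the left and bottom edges
mypde = Poisson(domain=mydomain)
mypde.setValue(f=1, q=gammaD)                       # -div grad u = f, u = 0 where q > 0
u = mypde.getSolution()                             # solved via Finley
```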
Abstract:
This paper describes a hybrid numerical method for the design of asymmetric magnetic resonance imaging (MRI) magnet systems. The problem is formulated as a field synthesis: the desired current density on the surface of a cylinder is first calculated by solving a Fredholm equation of the first kind, and nonlinear optimization methods are then invoked to fit practical magnet coils to the desired current density. The field calculations are performed using a semi-analytical method. A new type of asymmetric magnet is proposed in this work. The asymmetric MRI magnet allows the diameter spherical imaging volume to be positioned close to one end of the magnet. The main advantages of making the magnet asymmetric include the potential to reduce the perception of claustrophobia for the patient, better access to the patient by attending physicians, and the potential for reduced peripheral nerve stimulation due to the gradient coil configuration. The results highlight that the method can be used to obtain an asymmetric MRI magnet structure with a very homogeneous magnetic field over the central imaging volume in clinical systems of approximately 1.2 m in length. Unshielded designs are the focus of this work. The method is flexible and may be applied to magnets of other geometries. (C) 1999 Academic Press.
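A minimal sketch of the first stage described above: discretizing a first-kind Fredholm equation by quadrature and stabilizing the resulting ill-conditioned solve with Tikhonov regularization. The Gaussian kernel, grids, and regularization parameter are toy stand-ins; the actual current-to-field kernel of the magnet problem is not reproduced.

```python
import numpy as np

def solve_fredholm_first_kind(kernel, g, s_grid, t_grid, alpha=1e-6):
    """Discretize  integral K(s,t) f(t) dt = g(s)  by quadrature and solve
    the ill-conditioned system with Tikhonov regularization:
        f = argmin ||K f - g||^2 + alpha * ||f||^2
    """
    w = np.gradient(t_grid)                        # quadrature weights
    K = kernel(s_grid[:, None], t_grid[None, :]) * w[None, :]
    A = K.T @ K + alpha * np.eye(len(t_grid))      # regularized normal eqns
    return np.linalg.solve(A, K.T @ g)

# Toy use with a smoothing kernel and a known source to recover.
s = np.linspace(0, 1, 80)
t = np.linspace(0, 1, 80)
f_true = np.sin(np.pi * t)
kern = lambda s, t: np.exp(-50 * (s - t) ** 2)
g = (kern(s[:, None], t[None, :]) * np.gradient(t)[None, :]) @ f_true
f_rec = solve_fredholm_first_kind(kern, g, s, t, alpha=1e-8)
print(np.max(np.abs(f_rec - f_true)))              # reconstruction error
```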
Abstract:
Computer-aided tomography has been used for many years to provide significant information about the internal properties of an object, particularly in the medical fraternity. By reconstructing one-dimensional (1D) X-ray images, 2D cross-sections and 3D renders can provide a wealth of information about an object's internal structure. An extension of the methodology is reported here to enable the characterization of a model agglomerate structure. It is demonstrated that methods based on X-ray microtomography offer considerable potential in the validation and utilization of distinct element method simulations, which are also examined.
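As a pointer to how such reconstructions are done in practice, here is a filtered back-projection sketch using scikit-image's Radon transform utilities on a standard test phantom; this stands in for, and is not, the authors' reconstruction pipeline.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Reconstruct a 2-D cross-section from 1-D projections by filtered
# back-projection, the basic operation behind the tomography described above.
image = rescale(shepp_logan_phantom(), 0.5)        # a standard test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)               # stack of 1-D projections
recon = iradon(sinogram, theta=theta, filter_name='ramp')
print(np.sqrt(np.mean((recon - image) ** 2)))      # reconstruction RMS error
```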
Abstract:
This study compared an enzyme-linked immunosorbent assay (ELISA) to a liquid chromatography-tandem mass spectrometry (LC/MS/MS) technique for measurement of tacrolimus concentrations in adult kidney and liver transplant recipients, and investigated how assay choice influenced pharmacokinetic parameter estimates and drug dosage decisions. Tacrolimus concentrations measured by both ELISA and LC/MS/MS from 29 kidney (n = 98 samples) and 27 liver (n = 97 samples) transplant recipients were used to evaluate the performance of these methods in the clinical setting. Tacrolimus concentrations measured by the two techniques were compared via regression analysis. Population pharmacokinetic models were developed independently using ELISA and LC/MS/MS data from 76 kidney recipients. Derived kinetic parameters were used to formulate typical dosing regimens for concentration targeting, and the dosage recommendations for the two assays were compared. The relation between LC/MS/MS and ELISA measurements was best described by the regression equation ELISA = 1.02 · (LC/MS/MS) + 0.14 in kidney recipients, and ELISA = 1.12 · (LC/MS/MS) − 0.87 in liver recipients. ELISA displayed less accuracy than LC/MS/MS at lower tacrolimus concentrations. Population pharmacokinetic models based on ELISA and LC/MS/MS data were similar, with residual random errors of 4.1 ng/mL and 3.7 ng/mL, respectively. Assay choice gave rise to dosage prediction differences ranging from 0% to 30%. ELISA measurements of tacrolimus are not automatically interchangeable with LC/MS/MS values. Assay differences were greatest in adult liver recipients, probably reflecting periods of liver dysfunction and impaired biliary secretion of metabolites. While the majority of data collected in this study suggested assay differences in adult kidney recipients were minimal, findings of ELISA dosage underpredictions of up to 25% in the long term must be investigated further.
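For illustration, the sketch below fits a method-comparison line of the same form as the reported regression equations by ordinary least squares on hypothetical paired data; real method-comparison studies often prefer Deming or Passing-Bablok regression, which account for measurement error in both assays.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical paired assay data standing in for real patient samples.
lcmsms = rng.uniform(2.0, 20.0, 100)                       # ng/mL
elisa = 1.02 * lcmsms + 0.14 + rng.normal(0.0, 1.0, 100)   # add assay noise

slope, intercept = np.polyfit(lcmsms, elisa, 1)    # ELISA = slope*(LC/MS/MS) + b
print(f"ELISA = {slope:.2f} * (LC/MS/MS) + {intercept:.2f}")
```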