43 results for Large-scale nonlinear systems
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has not previously been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms and compares their performance in terms of CPU time (under the respective stopping criteria) and storage requirements. The BR algorithm employs accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in the eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
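The BR algorithm itself is not available in common numerical libraries, but the baseline workflow it is compared against can be sketched: reduce a hypothetical power-system state matrix to upper Hessenberg form and extract all eigenvalues with the standard QR-based solver, e.g. in SciPy.

```python
# Baseline QR workflow that the paper compares the BR algorithm against.
# The BR algorithm is not part of NumPy/SciPy; this sketch only illustrates
# reducing a hypothetical state matrix to upper Hessenberg form and computing
# all eigenvalues with the standard QR-based solver.
import numpy as np
from scipy.linalg import hessenberg, eigvals

rng = np.random.default_rng(0)
n = 200                               # stand-in for the order of a state matrix
A = rng.standard_normal((n, n))       # hypothetical state matrix: dx/dt = A x

H = hessenberg(A)                     # orthogonal reduction to upper Hessenberg form
lam = eigvals(H)                      # QR-based eigenvalues (same spectrum as A)

# Small-signal stability requires all eigenvalues in the open left half-plane.
print("max real part:", lam.real.max())
print("stable:", bool(np.all(lam.real < 0)))
```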
Abstract:
We use the finite element method to solve coupled problems of pore-fluid flow and heat transfer in fluid-saturated porous rocks. In particular, we investigate the effects of both a hot pluton intrusion and topographically driven horizontal flow on the distributions of the pore-fluid velocity and temperature in large-scale hydrothermal systems. Since general mineralization patterns depend strongly on the distributions of both the pore-fluid velocity and temperature fields, modern mineralization theory has been used to predict the general mineralization patterns in several realistic hydrothermal systems. The numerical results demonstrate that: (1) The existence of a hot intrusion can cause an increase in the maximum value of the pore-fluid velocity in the hydrothermal system. (2) The permeability of an intruded pluton is one of the sensitive parameters controlling pore-fluid flow, heat transfer and ore body formation in hydrothermal systems. (3) The maximum value of the pore-fluid velocity increases when the bottom temperature of the hydrothermal system is increased. (4) The topographically driven flow has significant effects on the pore-fluid flow, temperature distribution and precipitation pattern of minerals in hydrothermal systems. (5) The size of the computational domain may have some effect on the pore-fluid flow and heat transfer, indicating that the size of a hydrothermal system may affect the pore-fluid flow and heat transfer within the system. (C) 2003 Elsevier Science B.V. All rights reserved.
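For orientation, the coupled formulation typically used for this class of problem (stated here in a generic steady form, not quoted from the paper) combines Darcy's law with Oberbeck-Boussinesq buoyancy, mass conservation, and advective-conductive heat transport, where k is the permeability, mu the fluid viscosity, beta the thermal expansion coefficient, and lambda_e the effective thermal conductivity of the saturated rock:

```latex
% Generic coupled formulation (not reproduced from the paper):
% Darcy flow with Boussinesq buoyancy and steady advective-conductive heat transport.
\begin{aligned}
  \mathbf{u} &= -\frac{k}{\mu}\left(\nabla p - \rho_f\,\mathbf{g}\right),
  \qquad \rho_f = \rho_0\left[1 - \beta\,(T - T_0)\right],\\
  \nabla\cdot\mathbf{u} &= 0,\\
  \rho_0 c_p\,\mathbf{u}\cdot\nabla T &= \nabla\cdot\left(\lambda_e\,\nabla T\right).
\end{aligned}
```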
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
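A minimal numerical sketch of the expected-rate idea (not the authors' formulation; the M/M/1/K queue below is only a stand-in example) computes the stationary distribution of a small continuous-time Markov chain and then the equilibrium expected rate of one class of transitions, which would replace the state-dependent rate in a simplified surrogate chain.

```python
# Sketch of the "equilibrium expected rate" idea on a hypothetical M/M/1/K queue:
# compute the stationary distribution of the original chain, then the equilibrium
# expected rate of the service transitions.
import numpy as np

lam, mu, K = 0.8, 1.0, 10                  # arrival rate, service rate, buffer size
Q = np.zeros((K + 1, K + 1))               # CTMC generator
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam                  # arrival
    if n > 0:
        Q[n, n - 1] = mu                   # service
    Q[n, n] = -Q[n].sum()

# Stationary distribution: pi Q = 0 with sum(pi) = 1 (least-squares solve).
A = np.vstack([Q.T, np.ones(K + 1)])
b = np.zeros(K + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Equilibrium expected service rate: E[mu * 1{n > 0}] = mu * P(n > 0).
expected_service_rate = mu * (1.0 - pi[0])
print("P(n > 0) =", 1.0 - pi[0])
print("expected service rate =", expected_service_rate)
```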
Abstract:
Published polymer distribution data for aqueous poly(ethylene glycol)/dextran mixtures have been reassessed to illustrate the feasibility of their quantitative characterization in terms of the Flory-Huggins theory of polymer thermodynamics. Phase diagrams predicted by this characterization procedure provide better descriptions of the experimental data than those based on an earlier, oversimplified treatment in similar terms. (C) 2003 Wiley Periodicals, Inc.
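For context, the standard Flory-Huggins expression for the mixing free energy of an aqueous two-polymer system (a textbook form, not reproduced from the paper) reads, with subscript 0 denoting water, 1 and 2 the two polymers, N_i the relative chain lengths, and chi_ij the interaction parameters obtained in the characterization:

```latex
% Textbook Flory-Huggins mixing free energy for water (0) / polymer 1 / polymer 2;
% phase diagrams follow from equating the chemical potentials derived from it
% in the two coexisting phases.
\frac{\Delta G_{\mathrm{mix}}}{RT}
  = \phi_0 \ln\phi_0
  + \frac{\phi_1}{N_1}\ln\phi_1
  + \frac{\phi_2}{N_2}\ln\phi_2
  + \chi_{01}\,\phi_0\phi_1
  + \chi_{02}\,\phi_0\phi_2
  + \chi_{12}\,\phi_1\phi_2
```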
Abstract:
Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic, as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and for further advancement of MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed-of-light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced transient eddy currents in superconducting MRI system models. The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B0 shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
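The step-size benefit of downscaling the speed of light follows directly from the Courant-Friedrichs-Lewy limit; the sketch below (Cartesian spacings and hypothetical grid values for simplicity, not the paper's cylindrical implementation) compares the maximum stable time step for the physical and a downscaled wave speed.

```python
# Illustration (not the paper's code) of why downscaling the speed of light relaxes
# the FDTD time step: the CFL limit is inversely proportional to the wave speed, so
# a low-frequency eddy-current problem can use a much larger stable time step.
import math

def cfl_time_step(dx, dy, dz, c):
    """Maximum stable FDTD time step for wave speed c on a 3-D Yee grid."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dx = dy = dz = 5e-3                  # 5 mm grid cells (hypothetical)
c0 = 2.998e8                         # physical speed of light, m/s
c_scaled = c0 / 1e4                  # downscaled speed (illustrative factor)

print("dt (physical c): %.3e s" % cfl_time_step(dx, dy, dz, c0))
print("dt (scaled c):   %.3e s" % cfl_time_step(dx, dy, dz, c_scaled))
```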
Abstract:
The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define a model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples that illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
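A minimal usage sketch in the spirit of the escript user guide's Poisson example is shown below; class and parameter names follow the released esys-escript/Finley packages and may differ from the prototype described here.

```python
# Minimal escript/Finley sketch (following the pattern of the user-guide Poisson
# example): solve -div(grad u) = 1 on a unit rectangle with u = 0 on two edges.
from esys.escript import whereZero, Lsup
from esys.escript.linearPDEs import Poisson
from esys.finley import Rectangle

# Generate a 40 x 20 element rectangular Finley mesh on the unit square.
mydomain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=20)

# Characteristic function of the Dirichlet boundary (left and bottom edges).
x = mydomain.getX()
gammaD = whereZero(x[0]) + whereZero(x[1])

# Define the PDE, set the right-hand side and Dirichlet mask, and solve.
mypde = Poisson(domain=mydomain)
mypde.setValue(f=1, q=gammaD)
u = mypde.getSolution()

print("sup |u| =", Lsup(u))
```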
Abstract:
In natural estuaries, contaminant transport is driven by turbulent momentum mixing. Scalar dispersion can rarely be predicted accurately because of a lack of fundamental understanding of the turbulence structure in estuaries. Herein, detailed turbulence field measurements were conducted at high frequency and continuously for up to 50 hours per investigation in a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was deemed the most appropriate measurement technique for such small estuarine systems with shallow water depths (less than 0.5 m at low tide), and a thorough post-processing technique was applied. The estuarine flow is always a fluctuating process. The bulk flow parameters fluctuated with periods comparable to the tidal cycle and other large-scale processes, but the turbulence properties depended upon the instantaneous local flow conditions. They were little affected by the flow history, yet their structure and temporal variability were influenced by a variety of mechanisms. This resulted in behaviour that deviated from that of an equilibrium turbulent boundary layer driven by velocity shear alone. A striking feature of the data sets is the large fluctuation of all turbulence characteristics during the tidal cycle. This feature has rarely been documented, and an important difference between the data sets used in this study and earlier reported measurements is that the present data were collected continuously at high frequency over relatively long periods. The findings shed new light on the fluctuating nature of momentum exchange coefficients and integral time and length scales; these turbulence properties should not be assumed constant.
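Two of the quantities discussed above can be sketched from a single-point velocity record such as an ADV time series; the code below (using a synthetic stand-in series, not the authors' post-processing) estimates a Reynolds stress and the integral time scale of the streamwise fluctuations.

```python
# Sketch (not the authors' code) of two turbulence quantities from a velocity
# record: the Reynolds stress -rho*<u'w'> and the integral time scale obtained by
# integrating the autocorrelation of u' up to its first zero crossing.
import numpy as np

fs = 25.0                                        # sampling frequency, Hz (hypothetical)
t = np.arange(0, 600, 1.0 / fs)                  # 10-minute record
rng = np.random.default_rng(1)
u = 0.30 + 0.05 * rng.standard_normal(t.size)    # streamwise velocity, m/s (synthetic)
w = 0.00 + 0.02 * rng.standard_normal(t.size)    # vertical velocity, m/s (synthetic)

u_f = u - u.mean()                               # fluctuations u'
w_f = w - w.mean()                               # fluctuations w'
rho = 1025.0                                     # estuarine water density, kg/m^3
reynolds_stress = -rho * np.mean(u_f * w_f)

# Autocorrelation of u', normalised at zero lag; integrate to first zero crossing.
acf = np.correlate(u_f, u_f, mode="full")[u_f.size - 1:]
acf /= acf[0]
first_zero = np.argmax(acf <= 0) or acf.size
T_u = acf[:first_zero].sum() / fs                # rectangle-rule integration, seconds

print("Reynolds stress -rho<u'w'> = %.4f Pa" % reynolds_stress)
print("integral time scale T_u    = %.3f s" % T_u)
```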
Abstract:
Objective: To assess consent to record linkage, describe the characteristics of consenters and compare self-report versus Medicare records of general practitioner use. Method: Almost 40,000 women in the Australian Longitudinal Study on Women's Health were sent a request by mail for permission to link their Medicare records and survey data. Results: 19,700 women consented: 37% of young (18-23 years), 59% of mid-age (45-50 years) and 53% of older women (70-75 years). Consenters tended to have higher levels of education and, among the older cohort, were in better health than non-consenters. Women tended to under-report the number of visits to general practitioners. Conclusions: Record linkage of survey and Medicare data on a large scale is feasible. The linked data provide information on health and socio-economic status which is valuable for understanding health service utilisation. Implications: Linked records provide a powerful tool for health care research, particularly in longitudinal studies.