66 results for Numerical example
in University of Queensland eSpace - Australia
Abstract:
Market-based transmission expansion planning gives investors information on where investment is most cost-efficient and brings benefits to those who invest in the grid. However, both market issues and power-system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It stands on both investors' and planners' points of view and further improves the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottleneck and the amount of loss arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information to guide investors' decisions. This index can truly reflect the random behaviour of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict the current system bottleneck under future operational conditions and how EEL can be used as one of the planning objectives to determine optimal future plans. Monte Carlo simulation, a well-known simulation method, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
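As a rough illustration of the Monte Carlo idea behind an expected-economical-loss index (a toy sketch, not the paper's actual EEL formulation or test system): sample random component outages, price the resulting load curtailment, and average. All numbers below (line capacities, availability, load, curtailment price) are hypothetical.

```python
import random

random.seed(1)

# Hypothetical two-line system: each line carries up to 100 MW and is
# available with probability 0.95; the load is fixed at 150 MW and
# curtailed energy is valued at 40 $/MWh.
LINE_CAPACITY = 100.0
AVAILABILITY = 0.95
LOAD = 150.0
PRICE = 40.0

def sample_loss():
    """Draw one random system state and return its economical loss ($/h)."""
    capacity = sum(LINE_CAPACITY for _ in range(2)
                   if random.random() < AVAILABILITY)
    unserved = max(0.0, LOAD - capacity)
    return unserved * PRICE

n = 200_000
eel_estimate = sum(sample_loss() for _ in range(n)) / n
# For this toy system the exact expectation is
# 40 * (0.095 * 50 + 0.0025 * 150) = 205 $/h.
```

The sample average converges to the exact expectation as the number of sampled system states grows, which is what makes the index computable for systems too large to enumerate.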
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. The GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained one suitable for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at an extremely fast convergence speed after exploring only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
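The penalty-based strategy can be sketched in a few lines: constraint violations are added to the objective as a penalty term, so the GA only ever sees an unconstrained fitness function. The problem below (minimise x1² + x2² subject to x1 + x2 ≥ 1) and all GA settings are hypothetical stand-ins, not the paper's structural design problem.

```python
import random

random.seed(0)

def objective(x):
    """Cost to minimise (a stand-in for the structural cost function)."""
    return x[0] ** 2 + x[1] ** 2

def penalized(x, r=100.0):
    """Penalty transformation: violated constraints inflate the cost."""
    g = max(0.0, 1.0 - (x[0] + x[1]))   # violation of x1 + x2 >= 1
    return objective(x) + r * g ** 2

# Simple real-coded GA: elitism, parent selection from the fitter half,
# arithmetic crossover, Gaussian mutation.
pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=penalized)
    next_pop = [list(p) for p in pop[:4]]          # keep the 4 best (elitism)
    while len(next_pop) < 40:
        a, b = random.sample(pop[:20], 2)          # parents from the fitter half
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # arithmetic crossover
        if random.random() < 0.3:                  # occasional mutation
            i = random.randrange(2)
            child[i] += random.gauss(0, 0.1)
        next_pop.append(child)
    pop = next_pop

best = min(pop, key=penalized)   # converges near (0.5, 0.5), cost about 0.5
```

The design choice worth noting is the penalty weight `r`: too small and the GA exploits infeasible designs, too large and the fitness landscape becomes nearly flat inside the feasible region.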
Abstract:
A new lifetime distribution capable of modeling a bathtub-shaped hazard-rate function is proposed. The proposed model is derived as a limiting case of the Beta Integrated Model and has both the Weibull distribution and the Type I extreme value distribution as special cases. The model can be considered as another useful 3-parameter generalization of the Weibull distribution. An advantage of the model is that its parameters can be estimated easily based on a Weibull probability paper (WPP) plot, which serves as a tool for model identification. Model characterization based on the WPP plot is studied. A numerical example is provided, and a comparison with another Weibull extension, the exponentiated Weibull, is also discussed. The proposed model compares well with other competing models in fitting data that exhibit a bathtub-shaped hazard-rate function.
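The WPP plot underlying this kind of estimation transforms lifetimes so that exact Weibull data fall on a straight line whose slope is the shape parameter; departures from a straight line signal a Weibull extension. A minimal sketch of that transform, using simulated Weibull data and median-rank CDF estimates (the sample size, seed, and true parameters are arbitrary choices for illustration):

```python
import math
import random

random.seed(2)

# Simulated lifetimes from a Weibull(shape=2.0, scale=100.0) distribution.
shape_true, scale_true = 2.0, 100.0
t = sorted(random.weibullvariate(scale_true, shape_true) for _ in range(2000))

# WPP coordinates: x = ln(t), y = ln(-ln(1 - F)), with Benard median-rank
# estimates F_i = (i - 0.3) / (n + 0.4) for the empirical CDF.
n = len(t)
x = [math.log(ti) for ti in t]
y = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]

# Least-squares line through the WPP points. For Weibull data,
# y = beta * ln(t) - beta * ln(eta): slope = shape, and the scale
# follows from the intercept.
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
scale_est = math.exp(-intercept / slope)
```

With the straight-line fit in hand, curvature of the plotted points relative to that line is what identifies a bathtub-shaped extension rather than a plain Weibull.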
Abstract:
The numerical solution of the time dependent wave equation in an unbounded domain generally leads to a truncation of this domain, which requires the introduction of an artificial boundary with associated boundary conditions. Such nonreflecting conditions ensure the equivalence between the solution of the original problem in the unbounded region and the solution inside the artificial boundary. We consider the acoustic wave equation and derive exact transparent boundary conditions that are local in time and can be directly used in explicit methods. These conditions annihilate wave harmonics up to a given order on a spherical artificial boundary, and we show how to combine the derived boundary condition with a finite difference method. The analysis is complemented by a numerical example in two spatial dimensions that illustrates the usefulness and accuracy of transparent boundary conditions.
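The paper's conditions are derived for a spherical artificial boundary in several dimensions; a minimal 1-D analogue still shows the core idea, namely that the discrete boundary update lets outgoing waves leave without reflection. In the sketch below (a hypothetical illustration, not the paper's scheme), the leapfrog wave equation is run at Courant number 1, where the simple characteristic condition u(0, t+dt) = u(dx, t) is exactly transparent.

```python
import math

# 1-D analogue: u_tt = c^2 u_xx on a truncated domain, leapfrog in time,
# with transparent boundaries u^{n+1}_0 = u^n_1 (exact when c*dt = dx).
nx, c, dx = 201, 1.0, 1.0
dt = dx / c                       # Courant number 1: the interior scheme is exact
r2 = (c * dt / dx) ** 2

# Gaussian pulse, initially at rest, centred in the domain.
u_old = [math.exp(-0.01 * (i - 100) ** 2) for i in range(nx)]
u = [u_old[i] + 0.5 * r2 * (u_old[min(i + 1, nx - 1)] - 2 * u_old[i]
                            + u_old[max(i - 1, 0)]) for i in range(nx)]

for _ in range(400):
    u_new = [0.0] * nx
    for i in range(1, nx - 1):    # standard leapfrog update in the interior
        u_new[i] = 2 * u[i] - u_old[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_new[0] = u[1]               # outgoing characteristic leaves unreflected
    u_new[-1] = u[-2]
    u_old, u = u, u_new

# After the split pulse has crossed both boundaries, essentially nothing
# remains in the domain: the boundaries did not reflect it back.
max_residual = max(abs(v) for v in u)
```

A reflecting condition (e.g. u = 0 at the ends) would instead trap the pulse in the domain indefinitely; the near-zero residual is exactly the equivalence with the unbounded-domain solution that nonreflecting conditions are meant to preserve.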
Abstract:
A new passive shim design method is presented which is based on a magnetization mapping approach. Well-defined regions with similar magnetization values define the optimal number of passive shims, their shape and their position. The new design method is applied in a shimming process without prior axial shim localization, which reduces the possibility of introducing new errors. The new shim design methodology reduces the number of iterations and the quantity of material required to shim a magnet. Only a few iterations (1-5) are required to shim a whole-body horizontal-bore magnet with a manufacturing error tolerance larger than 0.1 mm and smaller than 0.5 mm. One numerical example is presented.
Abstract:
Purpose - In many scientific and engineering fields, large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered; heat transfer from the mantle into the upper crust of the Earth is a typical example. The main purpose of this paper is to develop and present a new combined methodology to solve large-scale heat transfer problems with temperature-dependent pore-fluid densities at the lithosphere and crust scales. Design/methodology/approach - The theoretical approach is used to determine the thickness and the related thermal boundary conditions of the continental crust on the lithospheric scale, so that accurate information can be provided for establishing a numerical model at the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust at the crustal scale. The main advantage of the proposed combination of theoretical and numerical approaches is that, if the thermal distribution in the crust is of primary interest, the use of a reasonable numerical model at the crustal scale can significantly reduce the computational effort. Findings - From the ore body formation and mineralization points of view, the present analytical and numerical solutions have demonstrated that the conductive-and-advective lithosphere with variable pore-fluid density is the most favourable lithosphere, because it may result in the thinnest lithosphere, so that the temperature near the surface of the crust can be hot enough to generate shallow ore deposits there. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot materials from the mantle may further reduce the thickness of the lithosphere.
Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems on the crustal scale; and investigate the fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be effectively used to consider the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
Abstract:
MBCNS2 is a small collection of programs for the simulation of transient two-dimensional (or axisymmetric) flows. It is part of the larger collection of compressible flow simulation codes found at http://www.mech.uq.edu.au/cfcfd/. This manual is a collection of example simulations: scripts, results and commentary. It may be convenient for new users of the code to identify an example close to the situation that they wish to model and then adapt the scripts for that example.
Abstract:
A kinetic theory based Navier-Stokes solver has been implemented on a parallel supercomputer (Intel iPSC Touchstone Delta) to study the leeward flowfield of a blunt-nosed delta wing at 30-deg incidence at hypersonic speeds (similar to the proposed HERMES aerospace plane). Computational results are presented for a series of grids for both inviscid and laminar viscous flows at Reynolds numbers of 225,000 and 2.25 million. In addition, comparisons are made between the present and two independent calculations of the same flows (by L. LeToullec and P. Guillen, and S. Menne), which were presented at the Workshop on Hypersonic Flows for Re-entry Problems, Antibes, France, 1991.
Abstract:
Comparisons are made between experimental measurements and numerical simulations of ionizing flows generated in a superorbital facility. Nitrogen, with a freestream velocity of around 10 km/s, was passed over a cylindrical model, and images were recorded using two-wavelength holographic interferometry. The resulting density, electron concentration, and temperature maps were compared with numerical simulations from the Langley Research Center aerothermodynamic upwind relaxation algorithm. The results showed generally good agreement in shock location and density distributions. Some discrepancies were observed for the electron concentration, possibly because the simulations were of a two-dimensional flow, whereas the experiments were likely to have small three-dimensional effects.
Abstract:
‘Living together on one’s own’ is the seemingly contradictory expression of the National Association of Housing Communities for Elderly People (LVGO) in The Netherlands which in fact captures the essence of cohousing. Cohousing is a novel kind of neighbourhood, housing a novel form of intentional community, which began to take shape in Denmark in the early to mid-1960s and, independently, in The Netherlands a few years later. The inventors of cohousing wanted to live in a much more communal or community-oriented neighbourhood than was usual, but they wanted to do so without sacrificing the privacy of individual families or households and their dwellings. Could they have their cake and eat it too? It would seem so. What is cohousing for older people (op-cohousing)? Op-cohousing is essentially no different, except for the differences in outlook or expectations, experience, interests and abilities that a particular, exclusively older, group of people have brought to this housing type. I discuss and analyse several communities in both countries.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association-rule mining can be useful for market-basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time-series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision-tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
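Of the example-based methods listed, k-nearest neighbours is the simplest to sketch: a query point is classified by majority vote among the k training points closest to it. The tiny two-class data set below is hypothetical, chosen only to make the vote obvious.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Tiny hypothetical two-class data set: (feature vector, label).
train = [((1.0, 1.0), 'A'), ((1.2, 0.8), 'A'), ((0.9, 1.1), 'A'),
         ((5.0, 5.0), 'B'), ((5.2, 4.9), 'B'), ((4.8, 5.1), 'B')]

label = knn_predict(train, (1.1, 0.9))   # all three nearest points are class 'A'
```

Being example-based, the method stores the training data verbatim and defers all computation to query time, which is exactly the trade-off that separates it from model-building approaches such as decision trees or SVMs.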
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer for three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
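The "action without the exponential" idea can be sketched with the standard Arnoldi construction: project A onto a small Krylov subspace, exponentiate only the small projected matrix, and lift the result back. This is a generic sketch of the technique (using `scipy.linalg.expm` for the small dense exponential), not the specific implementation benchmarked in the paper; the ring-shaped Markov chain at the end is a hypothetical example.

```python
import numpy as np
from scipy.linalg import expm

def expm_action(A, v, m=30):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace (Arnoldi)."""
    n = v.size
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))       # orthonormal Krylov basis vectors
    H = np.zeros((m + 1, m))       # small upper-Hessenberg projection of A
    V[:, 0] = v / beta
    j_used = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):     # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:    # happy breakdown: the subspace is invariant
            j_used = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Only the small (j_used x j_used) exponential is ever formed explicitly.
    return beta * V[:, :j_used] @ expm(H[:j_used, :j_used])[:, 0]

# Toy continuous-time Markov chain: unit-rate transitions around a 12-state ring.
n = 12
Q = np.zeros((n, n))               # infinitesimal generator (rows sum to zero)
for i in range(n):
    Q[i, (i + 1) % n] = 1.0
    Q[i, i] = -1.0
p0 = np.zeros(n)
p0[0] = 1.0
p_t = expm_action(Q.T, p0)         # transient distribution at t = 1
```

For a sparse generator with millions of states, only matrix-vector products with A appear in the loop, so the large exponential never has to be stored or formed, which is the source of the robustness the abstract describes.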
Abstract:
We present a numerical methodology for the study of convective pore-fluid, thermal and mass flow in fluid-saturated porous rock basins. In particular, we investigate the occurrence and distribution pattern of temperature-gradient-driven convective pore-fluid flow and hydrocarbon transport in the Australian North West Shelf basin. The related numerical results have demonstrated that: (1) the finite element method combined with the progressive asymptotic approach procedure is a useful tool for dealing with temperature-gradient-driven pore-fluid flow and mass transport in fluid-saturated hydrothermal basins; (2) convective pore-fluid flow generally becomes focused in more permeable layers, especially when the layers are thick enough to accommodate the appropriate convective cells; (3) large dislocation of strata has a significant influence on the distribution patterns of convective pore-fluid flow, thermal flow and hydrocarbon transport in the North West Shelf basin; (4) as a direct consequence of the formation of convective pore-fluid cells, the hydrocarbon concentration is highly localized in the range bounded by two major faults in the basin.
Theoretical and numerical analyses of convective instability in porous media with upward throughflow
Abstract:
Exact analytical solutions have been obtained for a hydrothermal system consisting of a horizontal porous layer with upward throughflow. The boundary conditions considered are constant temperature and constant pressure at the top of the layer, and constant vertical temperature gradient and constant Darcy velocity at the bottom. After deriving the exact analytical solutions, we examine their stability using linear stability theory and the Galerkin method. It has been found that the exact solutions for such a hydrothermal system become unstable when the Rayleigh number of the system is equal to or greater than the corresponding critical Rayleigh number. For small and moderate Peclet numbers (Pe less than or equal to 6), an increase in upward throughflow destabilizes the convective flow in the horizontal layer. To confirm these findings, the finite element method with the progressive asymptotic approach procedure is used to compute the convective cells in such a hydrothermal system. Copyright (C) 1999 John Wiley & Sons, Ltd.