946 results for "Numerical error"
Abstract:
For two-layered long wave propagation, linearized governing equations, derived earlier from the Euler equations of mass and momentum under the assumptions of negligible friction and interfacial mixing, are solved analytically using the Fourier transform. For the analytical solution, the variation of the upper layer water level is assumed to be sinusoidal with known amplitude, and the variation of the interface level is solved for. As the governing equations are too complex to solve analytically in full, the density of the upper layer fluid is assumed to be very close to that of the lower layer fluid in order to simplify the lower layer equation. A numerical model is developed using the staggered leap-frog scheme for the computation of water level and discharge in one-dimensional propagation, with the variation of the upper layer water level prescribed with known amplitude and the interface level to be solved for. For the numerical model, the water levels (upper layer and interface) at both boundaries are assumed to be known from the analytical solution. Results of the numerical model are verified by comparison with the analytical solutions for different time periods, and good agreement is found for the stated boundary conditions. The reliability of the developed numerical model is discussed by applying it for different values of α (the ratio of the density of the fluid in the upper layer to that in the lower layer) and p (the ratio of the water depth in the lower layer to that in the upper layer). It is found that as α increases, the amplification of the interface also increases for the same upper layer amplitude. Likewise, for a constant lower layer depth, the amplification of the interface increases with p for the same upper layer amplitude.
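As an illustration of the staggered leap-frog scheme referred to above, the following is a minimal single-layer sketch in Python (NumPy), assuming hypothetical depth, grid, and wave parameters. The paper's model treats two layers and prescribes water levels at both boundaries from the analytical solution; this sketch instead forces a sinusoidal water level at one boundary only.

import numpy as np

# Hypothetical parameters (not the paper's): single-layer long wave on a staggered grid.
g, h = 9.81, 10.0                    # gravity [m/s^2], still-water depth [m]
L, nx = 10_000.0, 200                # domain length [m], number of water-level points
dx = L / nx
dt = 0.5 * dx / np.sqrt(g * h)       # CFL-limited time step
T, amp = 600.0, 0.5                  # boundary wave period [s] and amplitude [m]

eta = np.zeros(nx)                   # water level at cell centres
q = np.zeros(nx + 1)                 # discharge per unit width at cell faces

t = 0.0
for n in range(2000):
    # continuity: update water level from the divergence of discharge
    eta -= dt * (q[1:] - q[:-1]) / dx
    # boundary forcing: sinusoidal water level prescribed at the left boundary
    eta[0] = amp * np.sin(2.0 * np.pi * t / T)
    # momentum: update discharge from the water-level gradient (staggered in space and time)
    q[1:-1] -= dt * g * h * (eta[1:] - eta[:-1]) / dx
    q[-1] = q[-2]                    # crude open condition at the right end
    t += dt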
Abstract:
In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system to solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed by employing a combined knowledge representation approach that uses rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment through the knowledge-based system approach. For system verification, results from design examples are presented.
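The combined rules / procedural methods / object-oriented blackboard organisation mentioned above can be pictured with a minimal, hypothetical Python sketch; it is not the LADOME implementation, and the class names and the two knowledge sources are illustrative placeholders.

# Minimal blackboard sketch: knowledge sources inspect a shared blackboard of design
# data and contribute when their rule fires; a simple controller loops until quiescence.

class Blackboard:
    def __init__(self):
        self.data = {}                       # shared design state (geometry, loads, sizes, ...)

class KnowledgeSource:
    def applicable(self, bb):                # rule part: is this expert relevant now?
        raise NotImplementedError
    def contribute(self, bb):                # procedural part: perform the design task
        raise NotImplementedError

class ConfigurationKS(KnowledgeSource):
    def applicable(self, bb):
        return "geometry" not in bb.data
    def contribute(self, bb):
        bb.data["geometry"] = "dome node/member layout would be generated here"

class AnalysisKS(KnowledgeSource):
    def applicable(self, bb):
        return "geometry" in bb.data and "forces" not in bb.data
    def contribute(self, bb):
        bb.data["forces"] = "nonlinear structural analysis would run here"

def controller(blackboard, sources):
    while True:
        fired = [ks for ks in sources if ks.applicable(blackboard)]
        if not fired:
            break
        for ks in fired:
            ks.contribute(blackboard)

bb = Blackboard()
controller(bb, [ConfigurationKS(), AnalysisKS()])
print(bb.data)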
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Second, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
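The subspace-matching step of the three-step procedure above can be sketched in Python as follows; the feature matrices and error-class labels are made-up placeholders, and the observer that produces the residual vector is assumed to exist upstream.

import numpy as np

def distance_to_subspace(r, F):
    """Norm of the part of residual r lying outside the column space of feature matrix F."""
    Q, _ = np.linalg.qr(F)                          # orthonormal basis of span(F)
    return np.linalg.norm(r - Q @ (Q.T @ r))

def isolate_error(residual, feature_matrices):
    """Return the error-class label whose feature subspace best explains the residual."""
    scores = {label: distance_to_subspace(residual, F)
              for label, F in feature_matrices.items()}
    return min(scores, key=scores.get), scores

# Toy example with two hypothetical error classes in a 4-dimensional residual space.
rng = np.random.default_rng(0)
features = {"wrong stoichiometric coefficient": rng.standard_normal((4, 2)),
            "wrong rate expression": rng.standard_normal((4, 2))}
residual = features["wrong rate expression"] @ np.array([1.0, -0.5])   # lies in that subspace
label, scores = isolate_error(residual, features)
print(label)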
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
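For a flavour of the linear (tomographic) reconstruction, here is a single-qubit sketch in Python; the two-qubit case treated in the paper uses the analogous expansion over the 16 two-qubit Pauli products, and the Stokes parameters below are invented numbers standing in for measured count statistics.

import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Stokes parameters S_i = <sigma_i>, which an experiment would estimate from photon counts.
S = {"x": 0.02, "y": -0.05, "z": 0.97}              # hypothetical values near |0><0|

rho = 0.5 * (I2 + S["x"] * sx + S["y"] * sy + S["z"] * sz)
print(np.round(rho, 3), "trace =", np.trace(rho).real)

# With noisy counts this linear inverse can produce negative eigenvalues, which is why the
# maximum likelihood fit mentioned above is used to enforce a non-negative definite matrix.
print("eigenvalues:", np.round(np.linalg.eigvalsh(rho), 3))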
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for the implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
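A minimal one-dimensional sketch of a moving least-squares fit with a quadratic polynomial basis is given below in Python; the basis, weight function, support radius, and sample data are illustrative choices, not the paper's surface-smoothing implementation, which operates on 2D/3D contact surfaces.

import numpy as np

def mls_eval(x, nodes, values, radius=1.0):
    """Moving least-squares value at x from scattered (nodes, values) data."""
    p = lambda s: np.array([1.0, s, s * s])                  # quadratic polynomial basis
    w = np.exp(-((nodes - x) / radius) ** 2)                 # Gaussian weight, local emphasis
    A = sum(wi * np.outer(p(xi), p(xi)) for wi, xi in zip(w, nodes))   # moment matrix
    b = sum(wi * vi * p(xi) for wi, vi, xi in zip(w, values, nodes))
    return p(x) @ np.linalg.solve(A, b)

# Values of sin(x) sampled at a few nodes stand in for signed-distance data on a surface.
nodes = np.linspace(0.0, np.pi, 8)
values = np.sin(nodes)
print(mls_eval(1.3, nodes, values), np.sin(1.3))             # MLS approximation vs exact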
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third-order statistics of the sampled points of attraction on these error surfaces are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
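One common way to probe ultrametricity in sampled solutions is sketched below in Python; the random vectors stand in for trained network weight vectors (points of attraction), and the statistic used, the relative gap between the two largest sides of sampled triangles, is an illustrative choice rather than the paper's exact third-order analysis.

import numpy as np

# In an ultrametric space every triangle is isosceles with the two largest sides equal,
# so this gap statistic would be close to zero for a highly ultrametric set of points.

rng = np.random.default_rng(1)
solutions = rng.standard_normal((200, 50))        # placeholder for sampled weight vectors

def mean_ultrametric_gap(points, n_triples=5000):
    gaps = []
    for _ in range(n_triples):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        d = sorted([np.linalg.norm(points[i] - points[j]),
                    np.linalg.norm(points[j] - points[k]),
                    np.linalg.norm(points[i] - points[k])])
        gaps.append((d[2] - d[1]) / d[2])         # relative gap between the two largest sides
    return float(np.mean(gaps))

print("mean relative gap:", mean_ultrametric_gap(solutions))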
Abstract:
The flow field and the energy transport near thermoacoustic couples are simulated using a 2D full Navier-Stokes solver. The thermoacoustic couple plate is maintained at a constant temperature, and plate lengths both short and long compared with the particle displacement lengths of the acoustic standing waves are tested. Also investigated are the effects of plate spacing and the amplitude of the standing wave. Results are examined in the form of energy vectors, particle paths, and overall entropy generation rates. These show that a net heat-pumping effect appears only near the edges of thermoacoustic couple plates, within about a particle displacement distance from the ends. A heat-pumping effect can be seen even on the shortest plates tested when the plate spacing exceeds the thermal penetration depth. It is observed that energy dissipation near the plate increases quadratically as the plate spacing is reduced. The results also indicate that there may be a larger scale vortical motion outside the plates which disappears as the plate spacing is reduced. (C) 2002 Acoustical Society of America.
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency estimation and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number and complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
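For context, here is a hypothetical Python sketch of the standard expectation-maximisation approach to haplotype frequency estimation from unphased genotypes of unrelated individuals, for two biallelic markers; the data are made up, and genotyping errors of the kind discussed above would enter as miscoded genotype values.

import numpy as np
from itertools import combinations_with_replacement

HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]           # haplotype = (allele at SNP1, allele at SNP2)
PAIRS = list(combinations_with_replacement(range(4), 2))

def compatible_pairs(genotype):
    """All unordered haplotype pairs whose allele counts match the unphased genotype."""
    g1, g2 = genotype                             # genotype = minor-allele counts at the two SNPs
    return [(i, j) for i, j in PAIRS
            if HAPS[i][0] + HAPS[j][0] == g1 and HAPS[i][1] + HAPS[j][1] == g2]

def em_haplotype_freqs(genotypes, n_iter=100):
    f = np.full(4, 0.25)                          # start from uniform haplotype frequencies
    for _ in range(n_iter):
        expected = np.zeros(4)
        for g in genotypes:
            pairs = compatible_pairs(g)
            w = np.array([(2 - (i == j)) * f[i] * f[j] for i, j in pairs])
            w /= w.sum()                          # E-step: posterior over compatible phasings
            for (i, j), wij in zip(pairs, w):
                expected[i] += wij
                expected[j] += wij
        f = expected / expected.sum()             # M-step: renormalise expected counts
    return dict(zip(["AB", "Ab", "aB", "ab"], f))

# Toy sample: only the doubly heterozygous genotype (1, 1) is phase-ambiguous.
genotypes = [(0, 0)] * 40 + [(1, 1)] * 30 + [(2, 2)] * 20 + [(1, 0)] * 10
print(em_haplotype_freqs(genotypes))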