Abstract:
An augmented immersed interface method (IIM) is proposed for simulating one-phase moving contact line problems in which a liquid drop spreads or recoils on a solid substrate. While the present two-dimensional mathematical model is a free boundary problem, in our new numerical method the fluid domain enclosed by the free boundary is embedded in a rectangular one, so that the problem can be solved by a regular Cartesian grid method. We introduce an augmented variable along the free boundary so that the stress balancing boundary condition is satisfied. A hybrid time discretization is used in the projection method for better stability. The resulting Helmholtz/Poisson equations with interfaces are then solved efficiently by the IIM. Several numerical tests, including an accuracy check and the spreading and recoiling processes of a liquid drop, are presented in detail. (C) 2010 Elsevier Ltd. All rights reserved.
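The projection step described above leads to Helmholtz/Poisson solves on a regular Cartesian grid. Below is a minimal Jacobi-iteration sketch of such a Poisson solve (homogeneous Dirichlet data on the unit square), intended only to illustrate the Cartesian-grid component; the paper's IIM interface corrections and the augmented variable along the free boundary are omitted entirely.

```python
import math

def poisson_jacobi(f, h, iters=2000):
    """Solve u_xx + u_yy = f on an n x n interior grid, u = 0 on the boundary."""
    n = len(f)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]   # padded with boundary zeros
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Five-point Laplacian, Jacobi update.
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                    u[i][j - 1] + u[i][j + 1] -
                                    h * h * f[i - 1][j - 1])
        u = new
    return u

# Usage: f = -2*pi^2 * sin(pi x) sin(pi y), whose exact solution is
# u = sin(pi x) sin(pi y); the computed centre value is close to 1.
n, h = 9, 0.1
f = [[-2 * math.pi ** 2 * math.sin(math.pi * (i + 1) * h)
      * math.sin(math.pi * (j + 1) * h) for j in range(n)]
     for i in range(n)]
u = poisson_jacobi(f, h)
```

A real free-boundary solve would add jump-condition corrections to the stencil at grid points cut by the interface; this sketch shows only the smooth-domain baseline.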
Abstract:
Numerical simulations of the multi-shock interactions observed around hypersonic vehicles were carried out by solving the Navier-Stokes equations with the AUSMPW scheme, and the new type-IV interaction created by two incident shock waves was investigated in detail. Numerical results show that the intersection point of the second incident shock with the bow shock plays an important role in the flow pattern, peak pressures, and heat fluxes. When the two incident shocks interact with the bow shock at the same position, a much higher peak pressure and a more severe heat transfer rate are induced than in the classical type-IV interaction. This phenomenon is referred to as the multi-shock interaction, and it imposes higher requirements on thermal protection systems.
Abstract:
IEEE Computer Society; International Association for Computer and Information Science (ACIS)
Abstract:
South Central University
Abstract:
A fractional-step method of predictor-corrector difference-pseudospectral type with unconditional L²-stability and exponential convergence is presented. The stability and convergence of the method are rigorously proved for a nonlinear convection-dominated flow. An error estimate is given, and the advantages of the method are verified by a numerical test.
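The fractional-step idea above can be illustrated by a much simpler scheme: split a 1D convection-diffusion equation u_t + a u_x = nu u_xx into a convection substep and a diffusion substep per time step. This sketch uses first-order upwind and explicit central differences on a periodic domain; it is not the thesis's difference-pseudospectral predictor-corrector, which has far stronger (unconditional) stability, and all parameter values are illustrative.

```python
import math

def fractional_step(u, a=1.0, nu=0.01, dx=0.02, dt=0.005, steps=100):
    """Advance u_t + a*u_x = nu*u_xx by operator splitting (periodic BCs)."""
    n = len(u)
    for _ in range(steps):
        # Substep 1: convection, explicit first-order upwind (a > 0).
        # Python's negative indexing makes u[i-1] wrap around periodically.
        conv = [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
        # Substep 2: diffusion, explicit central differences.
        u = [conv[i] + nu * dt / dx ** 2 *
             (conv[(i + 1) % n] - 2 * conv[i] + conv[i - 1])
             for i in range(n)]
    return u

# Usage: a smooth bump is advected to the right while its peak decays.
n = 100
u0 = [math.exp(-50 * (i / n - 0.3) ** 2) for i in range(n)]
u1 = fractional_step(u0)
```

Both substeps are convex averages under the chosen CFL numbers (0.25 for convection, 0.125 for diffusion), so the maximum cannot grow and the total mass is conserved exactly on the periodic grid.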
Abstract:
Numerical models are a powerful tool for studying tidal waves, but their use raises a number of practical problems, including the specification of open boundary conditions and the choice of bottom friction and dissipation coefficients. Data assimilation is one way to address these problems: a limited number of tidal observations is used to obtain an optimal estimate of the tidal wave, the underlying aim being to force the model prediction toward the observations so that the model does not drift too far from reality. This thesis adopts an optimized open boundary method that assimilates tidal elevation information along the open boundary of the numerical model, so that the numerical solution approaches the observations in a dynamically constrained sense and yields the tidal field of the study region. The boundary values are determined by the solution of a prescribed optimization problem so as to improve the tidal accuracy in the simulated region; the optimization is based on the variation of the energy flux through the open boundary, and minimizes the difference between observed and computed values at the open boundary. A radiation-type boundary condition derived by Reid and Bodine (abbreviated RB here) is used for reference; the optimized RB method we adopt (called ORB) is a special case of the optimized open boundary.

Tidal waves are simulated for an idealized rectangular sea ( E– E, N– N, resolution ) with an eastern open boundary, using the ECOM3D model. Four statistics are used to assess the simulations: the mean amplitude bias, the mean absolute bias, the mean relative error, and the root-mean-square (RMS) deviation.

The analytical tidal values to be assimilated at the open boundary are obtained by the method of Fang Guohong, "Tides and Tidal Currents in Bays" (1966). To verify that our analytical solution agrees with that work, we computed the key values a, b, and z of its first example; the results agree very well, with slight differences probably caused by differences in the iteration scheme and in the number of decimal places retained by the computer. Taking m = 20 gives more accurate values: compared with m = 10, the parameters of the first ten terms improve slightly, and still larger m could be used.

To further check the analytical solution, we examined how the boundary values depend on m and l. Increasing m, at m = 20 the maximum modulus of u is 6% of the modulus of u1 or u2 themselves; at m = 100 it is 4%; and at m = 1000 it is still 4%, i.e. it changes little. When l < 1, the maximum modulus of u at  = 0 is 2; when l = 1 it is 0.1; when l > 1, the larger l is, the smaller the modulus of u, and at l = 10 the maximum is 0.001, effectively zero.

To test the optimization method, we first applied it to the 30 m depth case of the idealized rectangular region. The model solution obtained with the optimized open boundary agrees very well with the analytical solution: over the whole region, the mean absolute amplitude bias is 9.9 cm, the mean absolute phase bias is only 4.0°, and the RMS deviation is only 13.3 cm, showing that the optimization method is effective in a tidal-wave model.

Three classes of sensitivity experiments were then carried out.

First, to show that the optimized open boundary brings the model solution closer to the analytical solution, we compared the ORB condition with the RB condition for two friction coefficients, k = 0 and k = 0.00006. For both values, the ORB solution is markedly better than the RB solution in both amplitude and phase, with RMS-deviation improvements of 84.3% and 83.7%, respectively; the optimization thus greatly improves the simulation. Of the two, k = 0.00006 gives a better optimized result than k = 0.

Second, with the ORB condition at the open boundary, inflow and outflow were added at the eastern and western boundaries, in both linear and nonlinear form. Adding the flow degrades the tidal simulation considerably: the RMS deviations of the 1 Sv and 5 Sv cases differ by 20 cm, whereas without flow the difference is only 0.2 cm. The linear and nonlinear flows give nearly identical model solutions, with similar amplitude and phase statistics, so the linearity of the flow has little effect on the results.

Third, the ORB condition was used not only at the open boundary but also in the model interior, and the interior-optimized and non-optimized solutions were compared with the analytical solution. For different values of k, the amplitude is always simulated well, while the phase is relatively poor. In the interior-optimized case we examined six values of k whose model solutions are close to the analytical solution; again, the amplitude is well reproduced for every k, while the phase is worse.

In summary, the ORB condition outperforms the RB condition at the open boundary, improving both amplitude and phase substantially; when inflow and outflow are added, the magnitude of the flow affects the results, but linear and nonlinear flows differ little; and with interior optimization the model reproduces the analytical amplitude well for all values of k tested.
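The four error statistics used above to score a simulation against the analytical solution can be sketched directly; here they are computed for amplitude, in centimetres. The sample numbers are hypothetical illustrations, not results from the thesis.

```python
import math

def tide_errors(model, truth):
    """Mean bias, mean absolute bias, mean relative error, RMS deviation."""
    d = [m - t for m, t in zip(model, truth)]
    mean_bias = sum(d) / len(d)
    mean_abs = sum(abs(x) for x in d) / len(d)
    mean_rel = sum(abs(x) / abs(t) for x, t in zip(d, truth)) / len(d)
    rms = math.sqrt(sum(x * x for x in d) / len(d))
    return mean_bias, mean_abs, mean_rel, rms

# Usage with hypothetical amplitudes (cm) at four stations.
model = [101.0, 95.0, 88.0, 120.0]   # simulated
truth = [100.0, 97.0, 90.0, 118.0]   # analytical
bias, abs_bias, rel_err, rms = tide_errors(model, truth)
# bias = -0.25 cm, abs_bias = 1.75 cm, rms ≈ 1.80 cm
```

The same four statistics apply unchanged to phase, with degrees in place of centimetres.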
Abstract:
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra, Lenstra, Lovász (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
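The link between basis reduction and subset sum rests on the standard lattice encoding of an instance: for weights a_i and target s, take rows b_i = e_i with N*a_i appended, plus a final row (0, ..., 0, -N*s). A 0/1 combination of the first n rows plus the target row is a short vector with last coordinate 0 exactly when the chosen weights sum to s, which is what reduction (LLL, or Seysen's method as in the thesis) tries to expose. The instance below is a toy example, not taken from the thesis.

```python
def subset_sum_basis(a, s, N=1000):
    """Lattice basis (as integer row vectors) encoding sum(x_i*a_i) = s."""
    n = len(a)
    rows = [[1 if j == i else 0 for j in range(n)] + [N * a[i]]
            for i in range(n)]
    rows.append([0] * n + [-N * s])   # target row
    return rows

def combine(rows, coeffs):
    """Integer combination sum_i coeffs[i] * rows[i]."""
    return [sum(c * r[j] for c, r in zip(coeffs, rows))
            for j in range(len(rows[0]))]

a = [3, 5, 8, 11]          # weights
s = 16                     # target: 5 + 11 = 16
B = subset_sum_basis(a, s)
x = [0, 1, 0, 1, 1]        # select rows 2 and 4, plus the target row
v = combine(B, x)          # -> [0, 1, 0, 1, 0]: zero last entry signals a solution
```

The large multiplier N penalizes any combination whose weights miss the target, so solution vectors are much shorter than non-solutions and tend to appear in a reduced basis.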
Abstract:
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does the "right" model mean anyway? Our work has three major components. The first deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second deals with defining and building the models used for solving problems. We identify these models, determine what we need to know about them, and, importantly, determine when they are appropriate. Currently the system has a collection of four basic models and two hybrid models, which has been successfully tested on a set of fifteen simple kinematics problems. The third major component deals with how the models are selected.
Abstract:
This report describes a paradigm for combining associational and causal reasoning to achieve efficient and robust problem-solving behavior. The Generate, Test and Debug (GTD) paradigm generates initial hypotheses using associational (heuristic) rules. The tester verifies hypotheses and, when a test fails, supplies the debugger with causal explanations for the bugs found. The debugger uses domain-independent causal reasoning techniques to repair hypotheses, analyzing domain models and the causal explanations produced by the tester to determine how to replace faulty assumptions made by the generator. We analyze the strengths and weaknesses of associational and causal reasoning techniques, and present a theory of debugging plans and interpretations. The GTD paradigm has been implemented and tested in the domains of geologic interpretation, the blocks world, and Tower of Hanoi problems.
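The GTD control loop can be rendered as a toy sketch. The domain here (guessing a linear rule y = w*x + b from examples) and every function in it are illustrative inventions, not the thesis's geologic-interpretation system; the point is only the shape of the loop: generate heuristically, test for a bug explanation, and repair the faulty assumption rather than discard the hypothesis.

```python
def generate():
    """Associational step: propose hypotheses (w, b) from a heuristic range."""
    for w in range(-3, 4):
        for b in range(-3, 4):
            yield (w, b)

def check(hyp, examples):
    """Test the hypothesis; on failure return a causal explanation
    (the failing example and the size of the discrepancy)."""
    w, b = hyp
    for x, y in examples:
        if w * x + b != y:
            return (x, y, w * x + b - y)   # bug explanation
    return None                            # hypothesis verified

def repair(hyp, bug):
    """Debug step: shift the intercept to absorb the observed error,
    instead of discarding the hypothesis outright."""
    w, b = hyp
    _, _, err = bug
    return (w, b - err)

def gtd(examples):
    for hyp in generate():
        bug = check(hyp, examples)
        if bug is None:
            return hyp
        repaired = repair(hyp, bug)
        if check(repaired, examples) is None:
            return repaired
    return None

examples = [(0, 4), (1, 6), (2, 8)]   # generated by y = 2*x + 4
hyp = gtd(examples)                   # -> (2, 4)
```

Only hypotheses with the correct slope survive the repair-and-retest step, so the loop finds (2, 4) even though b = 4 lies outside the generator's heuristic range, which is the debugging payoff the paradigm describes.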
Abstract:
In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images.
(5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
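A minimal sketch of the Monte Carlo estimation idea in contribution (1): a binary 1D signal with an Ising-style Gibbsian prior is observed through additive Gaussian noise, a Gibbs sampler draws from the posterior, and the per-site posterior marginals yield the MPM (maximizer of posterior marginals) estimate. Every parameter value and name here is illustrative, not taken from the thesis.

```python
import math
import random

def gibbs_mpm(obs, beta=1.0, sigma=0.6, sweeps=200, burn=50, seed=0):
    """MPM estimate of a binary signal via Gibbs sampling of the posterior."""
    rng = random.Random(seed)
    n = len(obs)
    x = [1 if y > 0.5 else 0 for y in obs]     # start from thresholded data
    votes = [0] * n
    for sweep in range(sweeps):
        for i in range(n):
            def energy(v):
                # Gaussian data term plus Ising smoothness term.
                e = (obs[i] - v) ** 2 / (2 * sigma ** 2)
                if i > 0:
                    e += beta * (v != x[i - 1])
                if i < n - 1:
                    e += beta * (v != x[i + 1])
                return e
            p1 = math.exp(-energy(1))
            p0 = math.exp(-energy(0))
            x[i] = 1 if rng.random() < p1 / (p0 + p1) else 0
        if sweep >= burn:
            for i in range(n):
                votes[i] += x[i]
    # MPM estimate: per-site majority vote over the posterior samples.
    return [1 if 2 * v > sweeps - burn else 0 for v in votes]

# Usage: recover a piecewise-constant step from noisy observations.
data_rng = random.Random(1)
true_x = [0] * 10 + [1] * 10
obs = [t + data_rng.gauss(0.0, 0.6) for t in true_x]
est = gibbs_mpm(obs)
```

The MPM criterion minimizes the expected number of misclassified sites, which is the kind of segmentation-oriented error criterion the thesis argues for over MAP estimation; the deterministic approximations of contribution (3) replace the sampler with fast fixed-point iterations.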