989 results for Korovkin theorem


Relevance:

10.00%

Publisher:

Abstract:

Dielectrophoresis (DEP) is widely used to separate and manipulate micro- and nanoparticles, and the key to any DEP operation is designing an electrode array that produces the required electric field distribution. Since simple and effective analytical methods for the electric field are still lacking in micro-electrode-array design, this paper proposes an analytical method for the electrode-array field based on Green's formula. The concepts and computational models of conventional and travelling-wave dielectrophoresis are first introduced, and the dependence of the DEP process on the frequency and amplitude of the alternating voltage applied to the electrodes is analyzed. After establishing the boundary conditions for the electrode potentials, an analytical model of the non-uniform electric field is built using the Green's-formula method, and simulation results for the field distribution of the electrode array under different conditions are obtained. Finally, the analytical model is compared against simulations in the FEMLAB finite-element package, verifying its feasibility. The Green's-formula-based analytical solution makes electrode-array design more targeted and shortens the design time.
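
The frequency dependence mentioned above can be illustrated with the standard Clausius-Mossotti factor from textbook DEP theory. This is general background, not the paper's Green's-formula field model, and the particle/medium parameters below are made-up but typical values:

```python
# The real part of the Clausius-Mossotti factor decides whether a particle
# experiences positive or negative dielectrophoresis at a given frequency
# of the applied AC field. (Textbook relation; parameter values invented.)
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(f_hz, eps_p, sig_p, eps_m, sig_m):
    w = 2 * np.pi * f_hz
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex particle permittivity
    em = eps_m * EPS0 - 1j * sig_m / w   # complex medium permittivity
    return (ep - em) / (ep + 2 * em)

# polystyrene-like bead in low-conductivity water
f = np.array([1e3, 1e5, 1e7, 1e9])
k = clausius_mossotti(f, eps_p=2.5, sig_p=1e-2, eps_m=78.0, sig_m=1e-4)
```

At low frequency the conductivities dominate (positive DEP here); at high frequency the permittivities dominate (negative DEP), which is the kind of frequency dependence the electrode design has to accommodate.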

Relevance:

10.00%

Publisher:

Abstract:

This paper uses a centralized pre-planning approach that avoids collisions among multiple robots by adjusting their travelling speeds. The basic idea of the algorithm is as follows: each robot's path is divided into segments, and the times at which a robot traverses each segment are constrained by the collision-avoidance requirements. The avoidance problem is thereby transformed into an optimization problem in a high-dimensional linear space, and further into the solution of a system of linear equations, so the problem has an explicit analytical solution. Because the method has high complexity, several techniques are used in the implementation to reduce complexity and simplify the computation. The paper presents the basic idea of the algorithm, the relevant theorems and proofs, and the simplification techniques, and concludes with experimental results and analysis.
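
The flavor of the time-constraint formulation can be sketched with a deliberately tiny example (my own toy construction, not the paper's algorithm): two robots whose segmented paths share one cell, where the collision-avoidance requirement becomes a linear inequality on segment entry times:

```python
# Minimal sketch: each path is split into segments; we pick per-segment
# traversal times so that robot B enters the shared cell only after robot A
# has left it. Times map back to speeds via v = length / duration.

def schedule(seg_lengths_a, seg_lengths_b, shared_a, shared_b, v_max=1.0):
    """Greedy feasible schedule: run both robots at v_max, then delay B's
    entry into the shared segment until A has exited it."""
    def entries(lengths):
        t, out = 0.0, []
        for length in lengths:
            out.append(t)          # entry time of this segment
            t += length / v_max
        out.append(t)              # final exit time
        return out

    ta, tb = entries(seg_lengths_a), entries(seg_lengths_b)
    a_exit = ta[shared_a + 1]          # A leaves the shared cell
    b_enter = tb[shared_b]             # B would enter the shared cell
    delay = max(0.0, a_exit - b_enter) # slow B down just enough
    tb = [t + delay if i >= shared_b else t for i, t in enumerate(tb)]
    return ta, tb

ta, tb = schedule([2.0, 1.0, 2.0], [2.0, 1.0, 2.0], shared_a=1, shared_b=1)
```

The paper's contribution is precisely that many such pairwise constraints can be assembled and solved analytically; this sketch only shows the shape of one constraint.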

Relevance:

10.00%

Publisher:

Abstract:

Building on an analysis of the Simple Genetic Algorithm (SGA), this paper proposes a two-generation-competition genetic algorithm with a new structure and gives the schema theorem governing its evolution. Theoretical analysis and an application study on the Travelling Salesman Problem (TSP) show that the algorithm combines high search efficiency with strong robustness.
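
One plausible reading of "two-generation competition" is (mu+lambda)-style survivor selection, in which parents and offspring compete for the next population. The sketch below uses that reading with standard TSP operators; it is an illustration of the idea, not the paper's exact algorithm:

```python
# Toy GA for a 5-city TSP: the next population is the best of the combined
# parent and offspring generations (two generations competing).
import random

def tour_len(tour, d):
    return sum(d[tour[i - 1]][tour[i]] for i in range(len(tour)))

def ox_crossover(p1, p2):
    """Order crossover: copy a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    hole = set(p1[i:j])
    rest = [c for c in p2 if c not in hole]
    return rest[:i] + p1[i:j] + rest[i:]

def evolve(d, pop_size=20, gens=100, seed=0):
    random.seed(seed)
    n = len(d)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        kids = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            child = ox_crossover(a, b)
            if random.random() < 0.2:          # swap mutation
                x, y = random.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            kids.append(child)
        # two-generation competition: survivors = best of parents + kids
        pop = sorted(pop + kids, key=lambda t: tour_len(t, d))[:pop_size]
    return min(pop, key=lambda t: tour_len(t, d))

# 5 cities on a line at positions 0..4: any closed tour has length >= 8
d = [[abs(i - j) for j in range(5)] for i in range(5)]
best = evolve(d)
```

Keeping the elite of both generations makes the best-so-far tour monotone non-increasing, which is the robustness property the abstract emphasizes.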

Relevance:

10.00%

Publisher:

Abstract:

This paper reviews three important results of 20th-century research on the stability of motion: Lyapunov functions, the Xie-Nie stability criterion, and Kharitonov's theorem. All three results apply to general continuous systems; their conclusions are clear-cut, simple and practical, and therefore widely applicable.
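
One of the three results, Kharitonov's theorem, lends itself to a short numerical illustration: an interval polynomial (each coefficient varying independently in an interval) is Hurwitz-stable iff its four "corner" Kharitonov polynomials are. Below, stability of each corner polynomial is checked via its numerically computed roots, a convenient stand-in for an exact Routh-Hurwitz test:

```python
# Kharitonov's four corner polynomials and a numeric Hurwitz check.
import numpy as np

def kharitonov_polys(lo, hi):
    """lo, hi: coefficient bounds in ascending order a0, a1, ..., an.
    Returns the four corner polynomials (ascending order)."""
    patterns = [            # which bound to take for a_k, period-4 pattern
        (lo, lo, hi, hi),
        (hi, hi, lo, lo),
        (hi, lo, lo, hi),
        (lo, hi, hi, lo),
    ]
    return [[pat[k % 4][k] for k in range(len(lo))] for pat in patterns]

def hurwitz(coeffs_ascending):
    roots = np.roots(coeffs_ascending[::-1])   # np.roots wants descending
    return bool(np.all(roots.real < 0))

def interval_stable(lo, hi):
    return all(hurwitz(p) for p in kharitonov_polys(lo, hi))

# s^3 + a2 s^2 + a1 s + a0 with a0 in [1,2], a1 in [4,6], a2 in [3,5]
lo = [1.0, 4.0, 3.0, 1.0]
hi = [2.0, 6.0, 5.0, 1.0]
stable = interval_stable(lo, hi)
```

Only four polynomial checks certify stability for the whole (uncountable) coefficient box, which is what makes the theorem so practical.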

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a motion-control algorithm for walking robots. Based on the principle of relative kinematics, the method converts the motion-planning problem of the robot body into a foot-trajectory-planning problem for the legs, which greatly simplifies the motion control of a walking robot. The method is then applied to analyze and solve the omnidirectional tripod-gait algorithm and its stability.

Relevance:

10.00%

Publisher:

Abstract:

In the past decade, density functional theory (DFT) has moved from a peripheral position in quantum chemistry to the center, driven above all by the often excellent accuracy of DFT-based methods. This dissertation is devoted to the study of the physical and chemical properties of planetary materials by first-principles calculation; the properties concerned include geometry, elastic constants and anisotropy. The first chapter gives a systematic introduction to the theoretical background and reviews its progress: the development of quantum chemistry promoted the establishment of DFT; the Hohenberg-Kohn theorem is the foundation of DFT and was developed into the Kohn-Sham equations, which can be used to perform real calculations; and new corrections and extensions, together with improved exchange-correlation functionals, have made DFT more accurate and suitable for larger systems. The second chapter focuses on the computational methods and technical aspects of DFT. Although developing methods and programs is important, external packages are still widely used, so the chapter closes with a brief review of some popular simulation packages and applications of DFT. The third chapter turns to the properties of real materials computed from first principles. We study the mineral Ca-perovskite, investigating its possible structures and anisotropy at conditions of the Earth's mantle. By understanding and predicting geophysically important material properties at extreme conditions, we can obtain the most accurate information for interpreting seismic data in the context of likely geophysical processes.

Relevance:

10.00%

Publisher:

Abstract:

As an active electromagnetic method, CSAMT produces field data that obey the diffusion equation. Propagating in the solid earth, the diffusive EM signal suffers strong attenuation and dispersion, whereas seismic waves show weak attenuation and dispersion, so the resolving power of CSAMT is inferior to that of seismic reflection. There is, however, a consistency and similarity between the EM and seismic wave equations, so the Kirchhoff integral migration technique, well proven in time-domain seismics, can be applied to pseudo-seismic processing of CSAMT signals in the frequency domain. The attenuation and dispersion can then be compensated to some extent, improving the resolving power and interpretation precision of the active EM method. Starting from Green's theorem for the passive homogeneous Helmholtz equation and combining it with the active inhomogeneous Helmholtz equation, the Kirchhoff integral formula can be derived. For practical problems, if only the surface integral is retained and the integrals over the other interfaces are assumed to vanish, the expression can be simplified using Green's theorem in a uniform half-space, yielding the frequency-domain Kirchhoff surface integral formula, also known as downward continuation of the EM field in the frequency domain. Taking imaging conditions and energy compensation into account, an inverse Fourier transform from the frequency domain gives the imaging condition in the time domain, from which the active Kirchhoff integral migration expression can be formulated.

We first construct a layered model and compute the EM response for a series of frequencies and a range of transmitter-receiver offsets. Analysis of the EM properties delineates the near and far zones, which guides transmitter layout in practical surveys. We then apply Kirchhoff integral migration to field data surveyed in the far zone and compare the results with the model for interpretation. Finally, using far-field EM data, we compute the TM-mode response of a given 2-D model, apply Kirchhoff integral migration to the modelled data, and interpret the results.

Relevance:

10.00%

Publisher:

Abstract:

The theory of limit analysis comprises an upper bound theorem and a lower bound theorem; applying it to slope stability means bracketing the true solution from above and below. The upper bound theorem is the more widely used of the two, so it is often applied in slope engineering. Although the upper bound approach avoids vague constitutive relations and complex stress analyses, it still yields rigorous results. Assuming a circular slip surface, two kinematically admissible velocity fields, for the vertical-slice and radial-slice methods, can be constructed according to the upper bound theorem. By means of the virtual work-rate equation and the strength-reduction method, the upper-bound solution for a homogeneous soil slope is obtained. A log-spiral rotational failure mechanism for a homogeneous slope is discussed for two cases, with the shear crack passing through the toe and below the toe. The dissertation also constructs a rotational failure mechanism composed of different logarithmic spiral arcs. Furthermore, the upper-bound formula for the stability of an inhomogeneous soil slope is derived using rigid elements. By calculating the external work rates contributed by soil nails, anti-slide piles, geogrids and retaining walls, the upper-bound solutions for the safety factor of a soil-nailed slope, the slip resistance of an anti-slide pile, the critical height of a reinforced soil slope and the active earth pressure on a retaining wall are obtained.

Taking an accumulation-body slope as the subject of investigation, and studying the limit-analysis computation of the slope safety factor, a kinematically admissible velocity field of the vertical-slice method is proposed for slopes with a broken (piecewise) slip surface. By calculating the energy dissipation rate on the broken slip surfaces and the vertical velocity discontinuities, as well as the work rates of self-weight and external loads, the upper-bound solution for a slope with a broken slip surface is deduced. As a case study, the stability of the Sanmashan landslide in the Three Gorges reservoir area is analyzed. Based on limit analysis, the upper-bound solution for a rock slope with a planar failure surface is obtained: the virtual work-rate equation gives the energy dissipated by dislocation of the thin layers and strata, from which the safety-factor formula of the upper-bound approach is deduced. Finally, a new computational model for the stability of an anchored rock slope is presented that accounts for the supporting effect of rock bolts, seismic force and fissure water pressure. With this model, the external work rates done by self-weight, seismic force, fissure water pressure and anchorage force, together with the internal energy dissipated on the slip surface and structural planes, can all be calculated. The safety-factor formula for the upper-bound limit analysis then follows from the virtual work-rate equation in the limit state.
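For the planar-failure rock-slope case, the balance of resisting and driving work terms reduces to the classical plane-failure expression. The sketch below uses that standard formula (the numerical values are invented, and the thesis's full model with bolts, seismic force and water pressure adds further terms):

```python
# Safety factor of a rock block on a planar slip surface:
# resisting work (cohesion + friction) over driving work (weight component).
import math

def planar_safety_factor(W, alpha_deg, c, A, phi_deg):
    """W: block weight per unit width (kN/m), alpha: slip-plane dip (deg),
    c: cohesion (kPa), A: slip-surface area per unit width (m2/m),
    phi: friction angle (deg)."""
    a = math.radians(alpha_deg)
    resisting = c * A + W * math.cos(a) * math.tan(math.radians(phi_deg))
    driving = W * math.sin(a)
    return resisting / driving

fs = planar_safety_factor(W=1200.0, alpha_deg=35.0, c=25.0, A=18.0,
                          phi_deg=30.0)
```

Each extra effect in the thesis (anchorage, seismic force, fissure water) enters this balance as an additional work-rate term in the numerator or denominator.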

Relevance:

10.00%

Publisher:

Abstract:

The dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal-representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, I present ICA fundamentals and algorithms and their original applications to separating natural-earthquake signals and survey seismic signals. For representation on deterministic bases, the thesis develops least-squares inversion regularization for seismic data reconstruction, sparseness constraints, preconditioned conjugate-gradient (PCG) methods, and their applications to seismic deconvolution, the Radon transform, and related problems. The core contribution is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the two representation settings are discussed separately, they fit into one framework: both restore signals or information, the former reconstructing original signals from mixtures, the latter reconstructing complete data from sparse or irregular data, and both provide pre- and post-processing methods for seismic pre-stack depth migration.

ICA can separate original signals from their mixtures, or extract the basic structure of the analyzed data. I survey its fundamentals, algorithms and applications; compared with the KL transform, I propose the concept of an independent components transform (ICT). Based on the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, to my knowledge for the first time in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show that ICA is usable for seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area with good results, enabling a more reasonable interpretation of subsurface discontinuities; the results show the promise of ICA for geophysical signal processing. Using the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss two possible ways of applying ICA to it. The relationship among PCA, ICA and the wavelet transform is stated, and it is proved that reconstruction of wavelet prototype functions is a Lie-group representation. In passing, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated on numerical examples.

The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue the critical role of regularization; a review of interpolation algorithms then shows that de-aliased reconstruction of uneven data is still a challenge. After discussing the fundamentals of seismic reconstruction, I study and implement sparseness-constrained least-squares inversion with a preconditioned conjugate-gradient solver. Choosing a Cauchy-distributed constraint term, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for data reconstruction. Existing methods handle de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data well separately, but could not previously combine the two. This thesis proposes a novel Fourier-transform-based method and algorithm that reconstructs seismic data that are both uneven and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion with an adaptive DFT-weighted norm regularization term; the inverse problem is solved by the preconditioned conjugate-gradient method, which makes the solution stable and quickly convergent. Assuming the seismic data consist of finitely many linear events, the sampling theorem implies that aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Synthetic and real data examples show that the proposed method is valid, efficient and applicable, and the research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3-D shot-profile depth migration, the data must be made even and consistent with the velocity dataset; the methods of this thesis are used to interpolate and extrapolate shot gathers instead of simply embedding zero traces, enlarging the migration aperture and improving the migration result. The results show the effectiveness and practicability of the approach.
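The core idea, recovering band-limited data from irregular samples by a damped minimum-norm least-squares fit of Fourier coefficients solved with conjugate gradients, can be sketched in miniature. This is my own toy version (tiny Fourier dictionary, plain damping instead of the thesis's adaptive DFT weighting and preconditioning):

```python
# Reconstruct a band-limited signal from irregular samples: solve the
# damped normal equations (A^H A + eps I) m = A^H d by conjugate gradients,
# where A samples a few Fourier modes at the irregular times.
import numpy as np

def cg(A_op, b, n_iter=200, tol=1e-10):
    """Conjugate gradients for a Hermitian positive-definite operator."""
    x = np.zeros_like(b)
    r = b - A_op(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A_op(p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def reconstruct(t_samples, d, t_grid, n_modes=5, eps=1e-6):
    """Fit coefficients of exp(2*pi*i*k*t) for the n_modes lowest modes."""
    ks = np.arange(n_modes) - n_modes // 2
    A = np.exp(2j * np.pi * np.outer(t_samples, ks))   # forward operator
    normal = lambda m: A.conj().T @ (A @ m) + eps * m  # damped normal eqs
    m = cg(normal, A.conj().T @ d)
    return np.exp(2j * np.pi * np.outer(t_grid, ks)) @ m

rng = np.random.default_rng(0)
t_irregular = np.sort(rng.uniform(0.0, 1.0, 40))       # uneven sampling
signal = lambda t: np.cos(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)
t_grid = np.linspace(0.0, 1.0, 100, endpoint=False)
rec = reconstruct(t_irregular, signal(t_irregular).astype(complex), t_grid)
err = np.max(np.abs(rec.real - signal(t_grid)))
```

The thesis's adaptive DFT weighting plays the role that the fixed damping `eps` plays here, suppressing the aliased spectral components predicted from the low frequencies.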

Relevance:

10.00%

Publisher:

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation, and various resistivity logging tools have been developed to acquire the true resistivity of the virgin formation. However, as proved reserves grow, the pay zones of interest are becoming thinner and thinner, especially in terrestrial deposits, so electrical logging tools, limited by the conflicting requirements of resolution and depth of investigation, cannot deliver the true formation resistivity. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because the data are noisy and the measurement information is deficient. This dissertation addresses the problem from three angles: data acquisition, data processing and inversion, and application of the results together with uncertainty evaluation of the non-unique solution. It also treats other shortcomings of traditional inversion methods, such as slow convergence and the dependence of the result on the initial model.

First, I deal with uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity; during inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, the errors can reach 30 percent because of mud-cake influence, even when rugose-borehole effects on the MSFL readings can be ignored, and it remains debatable whether the two logs can be combined quantitatively, given their different measurement principles. A new type of laterolog tool is therefore designed theoretically; it provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m.

Second, because the popular iterative inversion based on least-squares estimation cannot solve for more than two parameters simultaneously, and the new laterolog tool has not yet been applied in practice, my work focuses on two-parameter inversion (invasion radius and virgin-formation resistivity) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion: a parameter is revised depending not only on its damping factor but also on the misfit between measured and fitted data in the different layers. At least two fewer iterations are needed than with the older method, reducing the computational cost of inversion. Damped least-squares inversion realizes Tikhonov's tradeoff between smoothness of the solution and stability of the inversion process; it works by linearizing a non-linear inverse problem, which necessarily makes the solution depend on the initial parameter values, so the efficiency of such methods is increasingly debated as non-linear processing methods develop. An artificial neural network method is therefore proposed in this dissertation: a database of tool responses to formation parameters is built by modelling the laterolog tool and used to train the networks; a unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. Results show that neural-network inversion can replace the traditional inversion within a single formation and can also supply the initial model for the traditional method.

Whatever method is used, non-uniqueness and uncertainty of the solution are unavoidable, so it is wise to evaluate them when applying inversion results. Bayes' theorem provides a way to do so; the approach is illustrated for a single formation and achieves plausible results. Finally, the traditional least-squares inversion is applied to raw logging data: compared with core analysis, the calculated oil saturation is 20 percent higher than that obtained from the unprocessed data.
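The Bayesian evaluation of non-uniqueness can be sketched generically (the forward model below is an invented two-reading toy, not the dissertation's tool physics): a flat prior and Gaussian noise model turn the misfit into a posterior over the inverted parameter, whose spread quantifies the remaining uncertainty:

```python
# Grid-based Bayes for a single inverted parameter rt ("true resistivity"):
# posterior ~ exp(-misfit / (2 sigma^2)) under a flat prior.
import numpy as np

def posterior(grid, data, forward, sigma):
    mis = np.array([np.sum((data - forward(rt)) ** 2) for rt in grid])
    like = np.exp(-0.5 * mis / sigma**2)
    return like / like.sum()

forward = lambda rt: np.array([0.8 * rt, 1.1 * rt])  # hypothetical 2-reading tool
true_rt = 20.0
rng = np.random.default_rng(1)
data = forward(true_rt) + rng.normal(0.0, 1.0, 2)    # noisy readings
grid = np.linspace(5.0, 40.0, 701)
p = posterior(grid, data, forward, sigma=1.0)
map_rt = grid[np.argmax(p)]                          # most probable value
```

The width of `p` (not just its peak) is the quantity of interest here: two very different `rt` values with comparable posterior mass are exactly the non-uniqueness the dissertation warns about.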

Relevance:

10.00%

Publisher:

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines but are also closely related to pattern-recognition methods such as Parzen windows and potential functions, and to several neural-network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.

Relevance:

10.00%

Publisher:

Abstract:

A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is non-trivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.
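For readers unfamiliar with the rule set whose tractability is discussed, here is a compact sketch of congruence closure itself (the standard textbook algorithm, not the paper's recognition procedure): union-find over terms plus the congruence rule "if the arguments of two applications of the same function are equal, the applications are equal":

```python
# Naive congruence closure: terms are constants (strings) or applications
# (tuples like ('f', 'a')); equalities are propagated to a fixpoint.

def congruence_closure(terms, equations):
    parent = {t: t for t in terms}

    def find(t):                      # union-find with path compression
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in equations:            # the given equalities
        union(a, b)

    apps = [t for t in terms if isinstance(t, tuple)]
    changed = True
    while changed:                    # apply the congruence rule to fixpoint
        changed = False
        for s in apps:
            for t in apps:
                if (s[0] == t[0] and len(s) == len(t)
                        and find(s) != find(t)
                        and all(find(x) == find(y)
                                for x, y in zip(s[1:], t[1:]))):
                    union(s, t)
                    changed = True
    return lambda a, b: find(a) == find(b)

# constants a, b and the applications f(a), f(b), f(f(a))
terms = ['a', 'b', ('f', 'a'), ('f', 'b'), ('f', ('f', 'a'))]
equal = congruence_closure(terms, [('a', 'b')])
```

From the single premise a = b the closure derives f(a) = f(b) by congruence, while unrelated terms stay in separate classes; the polynomial bound on this fixpoint computation is the tractability property the paper's procedure recognizes mechanically.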

Relevance:

10.00%

Publisher:

Abstract:

This research is concerned with designing representations for analytical reasoning problems (of the sort found on the GRE and LSAT). These problems test the ability to draw logical conclusions. A computer program was developed that takes as input a straightforward predicate calculus translation of a problem, requests additional information if necessary, decides what to represent and how, designs representations capturing the constraints of the problem, and creates and executes a LISP program that uses those representations to produce a solution. Even though these problems are typically difficult for theorem provers to solve, the LISP program that uses the designed representations is very efficient.

Relevance:

10.00%

Publisher:

Abstract:

One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. The thesis examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control-structure issues and the problem of modelling change under incomplete knowledge are also considered. Extensive comparisons are also made with systems for mechanical theorem proving.

Relevance:

10.00%

Publisher:

Abstract:

The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular for relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented: a progression of ever more powerful languages is described, complete implementations are given, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
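
The network-of-devices model described above can be sketched in a few lines (my own simplified version, in the spirit of such constraint systems rather than the thesis's languages): cells hold values, and an "adder" device enforces a + b = c, firing in whichever direction has enough locally available information:

```python
# A tiny multidirectional constraint network: setting any two ports of an
# Adder deduces the third, and deductions propagate through shared cells.
class Cell:
    def __init__(self):
        self.value = None
        self.devices = []

    def set(self, v):
        if self.value is None:
            self.value = v
            for d in self.devices:       # wake up attached devices
                d.propagate()
        elif self.value != v:
            raise ValueError("contradiction")

class Adder:
    """Enforces a + b = c, usable in any direction."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.devices.append(self)

    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None:
            self.c.set(a + b)            # forward
        elif a is not None and c is not None:
            self.b.set(c - a)            # backward
        elif b is not None and c is not None:
            self.a.set(c - b)            # backward

# Network: x + y = z, z + w = total. Set z, total and x; y and w are
# deduced purely by local propagation, with no global solver.
x, y, z, w, total = (Cell() for _ in range(5))
Adder(x, y, z)
Adder(z, w, total)
z.set(10); total.set(25); x.set(4)
```

Note that no port is designated as input or output: the same `Adder` ran "backwards" to deduce `w` and `y`, which is exactly the multidirectionality the thesis emphasizes, while the locality of `propagate` is the limitation that keeps deduction from exploding.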