955 results for Rayleigh-Ritz theorem
Abstract:
There has been growing concern about the use of fossil fuels and its adverse effects on the atmospheric greenhouse effect and the ecological environment. Reducing the rate at which CO2 is released into the atmosphere poses a major challenge to the land ecology of China. The most promising way of achieving CO2 reduction is to dispose of CO2 in deep saline aquifers, which have a large potential for geological CO2 sequestration in terms of both volume and duration. Through numerical simulation of multiphase flow in porous media, the transformation and motion of CO2 in saline aquifers can be modelled under various temperature and hydrostatic pressure conditions, which plays an important role in assessing the reliability and safety of CO2 geological storage. The calculated results can provide meaningful scientific information for management purposes. The key problems in the numerical simulation of multiphase flow in porous media are to accurately capture the mass interface and to deal with geological heterogeneity. In this study, an updated CE/SE (space-time conservation element and solution element) method is proposed, and the Hybrid Particle Level Set (HPLS) method is extended to multiphase flows in porous media, so that the transformation of the mass interface can be traced accurately. Benchmark problems have been used to evaluate and validate the proposed method. The reliability of CO2 storage in saline aquifers in the Daqingzi oil field in the Songliao basin is then discussed. The simulation code developed in this study uses an equation of state for CO2 covering conditions from the triple point to the supercritical region. Geological heterogeneity has been implemented using the well-known geostatistical package GSLIB, conditioned on the available hard data.
2D and 3D models have been set up to simulate CO2 multiphase flow in the porous saline aquifer, applying the CE/SE method and the HPLS method. The main contents and results are summarized as follows. (1) The 2D CE/SE method with first- and second-order accuracy has been extended to simulate multiphase flow in porous media, taking into account the contribution of sources and sinks in the momentum equation. The 3D CE/SE method with first-order accuracy has been derived. The accuracy and efficiency of the proposed CE/SE method have been investigated using benchmark problems. (2) The hybrid particle level set method has been adapted and extended to capture the mass interface of multiphase flows in porous media, and a numerical method for computing the level set function has been formulated. (3) Closed governing equations for multiphase flow in porous media have been developed that apply to both Darcy and non-Darcy flow, removing the Reynolds-number limitation on the calculation. It is found that the Darcy number has a decisive influence on both pressure and velocity. (4) A new Eulerian scheme for numerical simulation of multiphase flows in porous media has been proposed, which is efficient and can accurately capture the mass interface. The artificial compressibility method has been used to couple the velocities and pressure. The Darcy number is found to have a decisive effect on numerical convergence and stability, and appropriate values of the artificial compressibility coefficient and the time step have been obtained for the different Darcy numbers. (5) The time scale of the critical instability for supercritical CO2 in the saline aquifer has been found, which is comparable with that of a saline aquifer in which the CO2 is completely dissolved.
(6) A conceptual model for CO2 multiphase flows in the saline aquifer has been configured, based on the temperature, pressure, porosity and permeability of the field site. Numerical simulation of CO2 hydrodynamic trapping in saline aquifers has been performed, applying the proposed CE/SE method. An equation of state for CO2 has been employed to account for realistic reservoir conditions for CO2 geological sequestration, and the geological heterogeneity has been fully treated using the geostatistical model. (7) The Rayleigh-Taylor instability, associated with the penetration of saline fluid into the CO2 fluid in the direction of gravity, is observed in CO2 multiphase flows in the saline aquifer. The development of a mushroom-type spike is a strong indication of Kelvin-Helmholtz instability, caused by short-wavelength perturbations that develop along the interface, parallel to the bulk flow. Additional key findings: geological heterogeneity can distort the flow convection, and the ascent of CO2 can induce persistent flow-cycling effects. The results show that the boundary conditions of the field site have a decisive effect on the transformation and motion of CO2 in saline aquifers. It is confirmed that the proposed method and numerical model are reliable for simulating hydrodynamic trapping, which is the controlling mechanism during the initial period of CO2 storage, at a time scale of 100 years.
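The interface-capturing idea behind the level set machinery described above can be illustrated with a hypothetical 1D sketch: a signed-distance function is advected with a first-order upwind scheme, and its zero crossing marks the CO2/brine interface. The grid, velocity, and time step below are illustrative values, not those of the study.

```python
import numpy as np

# Hypothetical 1D level set sketch: the zero crossing of phi marks the
# interface; phi is advected by a first-order upwind scheme (u > 0).
nx, dx, dt = 200, 0.01, 0.004
x = dx * np.arange(nx)
phi = x - 0.5                  # signed distance: interface starts at x = 0.5
u = 0.25                       # constant interface velocity (CFL = 0.1)

for _ in range(100):           # total time 0.4 -> displacement u*t = 0.1
    grad = np.empty_like(phi)
    grad[1:] = (phi[1:] - phi[:-1]) / dx   # backward difference for u > 0
    grad[0] = grad[1]                      # simple inflow extrapolation
    phi -= dt * u * grad

i = int(np.argmin(np.abs(phi)))            # grid point nearest the interface
print(round(x[i], 2))          # 0.6: the interface has moved from 0.5 to 0.6
```

Because the initial level set is exactly linear, the upwind scheme transports the zero crossing without distortion here; the hybrid particle correction of HPLS only matters once the interface develops the fine-scale features the abstract describes.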
Abstract:
Surface-wave propagation in anisotropic media and S-wave splitting in mainland China are the focus of this M.S. dissertation. We first introduced Anderson parameters into the study of surface-wave propagation in anisotropic media, and the corresponding equations were deduced. By applying a given initial model to the forward calculation of Love waves, we compared the dispersion curves of Love waves in anisotropic media with those in isotropic media. The results show that, although the two kinds of results are similar, the effect of anisotropy cannot be neglected. Furthermore, variation of the anisotropy factors changes the dispersion curves, especially for the higher modes. The method of grid dispersion inversion was then described for further tectonic inversion. We also derived the inversion equations for the case of anisotropic layered media, and calculated the phase-velocity partial derivatives with respect to the model parameters (P- and S-wave velocities, density, and anisotropic parameters) for Rayleigh and Love waves. Analysis of these partial derivatives shows that, within each period, the derivatives decrease with increasing depth, and that the phase velocity of surface waves is sensitive to the S-wave velocities and anisotropic factors but insensitive to the layer densities. Love-wave dispersion data from events with magnitudes greater than 5.5 that occurred around the Qinghai-Tibet Plateau between 1991 and 1998 were used in the grid dispersion inversion. These data were preprocessed and analyzed in the F-T domain. The results of a 1°×1° grid dispersion inversion, i.e. the pure-path dispersion data, were then obtained for the Qinghai-Tibet Plateau region.
As an example, dispersion data were input for tectonic inversion in anisotropic media, and the resulting anisotropic factors beneath the Qinghai-Tibet Plateau region were preliminarily discussed. In the other part of this dissertation, we first introduced the phenomenon of S-wave splitting and the methods for calculating the splitting parameters. We then applied a Butterworth band-pass filter to S-wave data recorded at 8 stations in mainland China and analyzed S-wave splitting in different frequency bands. The results show that the delay time and the fast polarization direction of S-wave splitting depend on the frequency band. There is an absence of S-wave splitting at the Wulumuqi station (WMQ) in the 0.1-0.2 Hz band. As the frequency band broadens, the delay time of S-wave splitting decreases at the Beijing (BJI), Enshi (ENH), Kunming (KMI) and Mudanjiang (MDJ) stations; the fast polarization direction changes from westward to eastward at Enshi (ENH), and from eastward to westward at Hailaer (HIA). The variations of delay time with band at Lanzhou (LZH) and Qiongzhong (QIZ) are similar, and there is a coherent trend in the fast polarization directions at BJI, KMI and MDJ. Initial interpretations of the frequency-band dependence of S-wave splitting are also presented.
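The delay-time part of the splitting measurement above can be sketched with a toy calculation: cross-correlate a synthetic fast and slow shear component and read the delay off the correlation peak. The waveform, sampling rate, and imposed delay are assumptions for illustration, not data from the dissertation.

```python
import numpy as np

# Toy delay-time estimate between fast and slow shear components via
# cross-correlation of two synthetic waveforms.
fs = 100.0                               # sampling rate, samples per second
t = np.arange(0.0, 4.0, 1.0 / fs)
wavelet = np.exp(-((t - 1.0) ** 2) / 0.02) * np.sin(2 * np.pi * 5 * t)

delay_samples = 12                       # imposed delay: 0.12 s
fast = wavelet
slow = np.roll(wavelet, delay_samples)   # wrap-around is negligible here

# full cross-correlation; the lag of the peak estimates the delay time
cc = np.correlate(slow, fast, mode="full")
lag = np.argmax(cc) - (len(fast) - 1)
print(lag / fs)                          # 0.12, matching the imposed delay
```

A real measurement would first band-pass both horizontal components (e.g. with a Butterworth filter, as in the dissertation) and search over rotation angles for the fast polarization direction; only the delay-time step is sketched here.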
Abstract:
This dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking the representation of signals as its main thread and seismic information reconstruction (signal separation and trace interpolation) as its core. For signal representation on natural bases, I present the fundamentals and algorithms of ICA (independent component analysis) and its original applications to the separation of natural earthquake signals and of exploration seismic signals. For representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient (PCG) methods, and their applications to seismic deconvolution, Radon transformation, etc. The core content is a de-aliasing reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with the restoration of signals or information: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. Both aim to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate the original signals from their mixtures, or extract the basic structure of the analyzed data. I surveyed the fundamentals, algorithms and applications of ICA. Comparing it with the KL transformation, I proposed the concept of an independent component transformation (ICT). Based on the negentropy measure of independence, I implemented FastICA and improved it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduced ICA into seismic signal processing, a first in the geophysical community, and implemented the separation of noise from seismic signals.
Synthetic and real data examples show that ICA is usable in seismic signal processing, and initial results have been achieved. ICA was applied to separating converted earthquake waves from multiples in a sedimentary area with good results, allowing a more reasonable interpretation of subsurface discontinuities. The results show the promise of applying ICA to geophysical signal processing. Using the relationship between ICA and blind deconvolution, I surveyed seismic blind deconvolution and discussed the prospects of applying ICA to it, with two possible solutions. The relationship between PCA, ICA and the wavelet transform is described, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution, which is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods in order to argue for the critical effect of regularization. Reviewing seismic interpolation algorithms, I argue that de-aliased reconstruction of uneven data is still a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion with a preconditioned conjugate gradient solver is studied and implemented. Choosing a constraint term with a Cauchy distribution, I programmed a PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. As for seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well separately, but the two could not previously be combined.
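The FastICA procedure summarized above (negentropy contrast, here with the common g = tanh nonlinearity and symmetric decorrelation) can be sketched compactly. The sources, mixing matrix, and iteration count below are illustrative assumptions, not data or settings from the dissertation.

```python
import numpy as np

# Compact FastICA sketch: whiten the mixtures, then run the fixed-point
# iteration  w <- E[z g(w.z)] - E[g'(w.z)] w  with g = tanh.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)
s1 = np.sign(np.sin(3.0 * t))          # square-wave source
s2 = np.sin(5.0 * t)                   # harmonic source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])
X = A @ S                              # the observed mixtures

# center and whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = (E / np.sqrt(d)) @ E.T @ X         # whitened data: unit covariance

W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / Z.shape[1] - np.diag((1.0 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                         # symmetric decorrelation of the rows

Y = W @ Z                              # recovered sources (up to sign/order)
for s in (s1, s2):
    best = max(abs(np.corrcoef(s, y)[0, 1]) for y in Y)
    print(best > 0.9)                  # each source matches one output
```

The inherent sign and ordering ambiguity of ICA is why the check correlates each source against all outputs rather than assuming a fixed pairing.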
In this dissertation, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem in which an adaptive, DFT-weighted norm regularization term is used. The inverse problem is solved by a preconditioned conjugate gradient method, which makes the solution stable and quickly convergent. Based on the assumption that seismic data consist of a finite number of linear events, it follows from the sampling theorem that aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data numerical examples show that the proposed method is valid, efficient and applicable. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this dissertation are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces; the migration aperture is thereby enlarged and the migration result improved. The results show the effectiveness and practicability of the approach.
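The core inversion idea above, i.e. recovering band-limited data from irregular samples by least-squares inversion of a Fourier basis, can be shown in miniature. This sketch uses a direct damped solve in place of the weighted-norm PCG machinery; the trace, band limit, and sampling pattern are all made-up assumptions.

```python
import numpy as np

# Reconstruct a band-limited trace from irregularly placed samples by
# damped least-squares inversion of a low-wavenumber DFT basis.
n = 128
t = np.arange(n)
signal = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 2 * t / n)

k = np.arange(-4, 5)                                   # retained wavenumbers
F = np.exp(2j * np.pi * np.outer(t, k) / n)            # DFT basis, (128, 9)

rng = np.random.default_rng(2)
keep = np.sort(rng.choice(n, size=48, replace=False))  # irregular sampling
A = F[keep]                                            # observed rows only

# damped normal equations (a tiny stand-in for the PCG-solved inversion)
coef = np.linalg.solve(A.conj().T @ A + 1e-8 * np.eye(k.size),
                       A.conj().T @ signal[keep])
recon = (F @ coef).real

err = np.max(np.abs(recon - signal))
print(err < 1e-3)   # missing samples recovered from the band-limit prior
```

Because the true trace lies inside the retained band, the inversion is exact up to the tiny damping; the dissertation's adaptive DFT weighting addresses the harder case where aliased energy leaks outside the low-frequency band.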
Abstract:
Formation resistivity is one of the most important parameters in reservoir evaluation. In order to acquire the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves increase, the pay zones of interest are becoming thinner and thinner, especially in terrestrial deposit oilfields, so that electrical logging tools, limited by the contradictory requirements of resolution and depth of investigation, cannot provide the true formation resistivity. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because of noisy data and insufficient measurement information. I address this problem in my dissertation from three aspects: data acquisition, data processing/inversion, and application of the results together with uncertainty evaluation of the non-unique solution. Some other problems of traditional inversion methods, such as slow convergence and the dependence of results on the initial model, are also addressed. Firstly, I deal with the uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, the errors can be as large as 30 percent due to mud-cake influence, even when rugose-borehole effects on the MSFL readings can be ignored. Furthermore, there are still arguments about whether the two logs can be used quantitatively to determine formation resistivities, given their different measurement principles. Thus, a new type of laterolog tool is designed theoretically.
The new tool can provide three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Secondly, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously, and because the new laterolog tool has not yet been applied in practice, my work focuses on two-parameter inversion (invasion radius and virgin-formation resistivity) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion method. In this new method, a parameter is revised depending not only on the damping factor itself but also on the difference between the measured data and the fitted data in different layers. At least two iterations fewer are needed than with the older method, reducing the computational cost of the inversion. The damped least-squares inversion method realizes Tikhonov's trade-off between the smoothness of the solution and the stability of the inversion process. It is realized by linearizing the non-linear inverse problem, which necessarily makes the solution depend on the initial parameter values. Severe debates on the efficiency of such methods have therefore arisen with the development of non-linear processing methods. An artificial neural network method is proposed in this dissertation. A database of tool responses to formation parameters is built by modelling the laterolog tool and is then used to train the neural networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation.
Results show that the neural network inversion method can replace the traditional inversion method in a single formation and can also be used to determine the initial values for the traditional method. No matter what method is developed, non-uniqueness and uncertainty of the solution are inevitable, so it is wise to evaluate them when applying inversion results. Bayes' theorem provides a way to solve such problems; the approach is discussed with illustrations for a single formation and achieves plausible results. Finally, the traditional least-squares inversion method is used to process raw logging data: compared against core analysis, the calculated oil saturation is 20 percent higher than that obtained from unprocessed data.
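The damped least-squares idea that runs through this abstract can be sketched for the same two unknowns, invasion radius and virgin-formation resistivity. The forward model below is a made-up stand-in for the dual laterolog response, not the real tool physics, and the adaptive damping is a generic Levenberg-Marquardt-style accept/reject rule rather than the dissertation's layer-weighted revision.

```python
import numpy as np

# Damped least-squares inversion of a toy two-parameter logging problem.
DEPTHS = np.array([0.5, 1.0, 2.0])     # pseudo depths of investigation (m)
R_XO = 2.0                             # assumed invaded-zone resistivity

def forward(m):
    ri, rt = m                         # invasion radius, true resistivity
    w = np.exp(-ri / DEPTHS)           # deeper curves see past the invasion
    return R_XO * (1.0 - w) + rt * w

m_true = np.array([0.4, 20.0])
d_obs = forward(m_true)                # noise-free synthetic "measurements"

m, lam = np.array([0.2, 10.0]), 1.0
for _ in range(300):
    r = d_obs - forward(m)
    # finite-difference Jacobian of the forward model
    J = np.column_stack([(forward(m + h) - forward(m)) / 1e-7
                         for h in 1e-7 * np.eye(2)])
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.linalg.norm(d_obs - forward(m + step)) < np.linalg.norm(r):
        m, lam = m + step, max(lam * 0.5, 1e-10)   # accept, relax damping
    else:
        lam *= 10.0                                # reject, damp harder

print(np.round(m, 3))   # should approach the true model [0.4, 20.0]
```

The damping term is exactly the Tikhonov trade-off the abstract mentions: large lambda gives small, stable gradient-like steps; small lambda recovers fast Gauss-Newton convergence near the solution.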
Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
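A minimal instance of the regularized radial-basis scheme described above: Gaussian basis functions on a few centers ("prototypes"), with output-layer weights obtained from a regularized linear solve. The target function, center placement, width, and regularization strength are illustrative choices, not values from the paper.

```python
import numpy as np

# Regularized Gaussian RBF approximation of a 1D function from noisy
# examples: the learning step is a single damped linear solve.
def gaussian_design(x, centers, width):
    # design matrix Phi[i, j] = exp(-|x_i - c_j|^2 / (2 width^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 40)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy examples

centers = np.linspace(-1.0, 1.0, 9)    # the prototypes of the network
width, lam = 0.3, 1e-3
Phi = gaussian_design(x, centers, width)
# regularized normal equations: (Phi^T Phi + lam I) w = Phi^T y
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centers.size), Phi.T @ y)

x_test = np.linspace(-1.0, 1.0, 101)
y_hat = gaussian_design(x_test, centers, width) @ w
err = np.max(np.abs(y_hat - np.sin(np.pi * x_test)))
print(err < 0.25)   # small approximation error against the clean target
```

With fewer centers than examples this is the "generalized" (approximating) form of the network; taking one center per data point and lam → 0 recovers strict RBF interpolation.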
Abstract:
A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is non-trivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.
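The congruence-closure rule set mentioned above can be demonstrated in a few lines: a union-find structure over terms, with the congruence rule "x = y implies f(x) = f(y)" propagated to a fixpoint. The string encoding of terms is an illustrative simplification.

```python
# Tiny congruence-closure sketch: union-find over terms plus the
# congruence rule, run to fixpoint over a fixed finite term universe.
parent = {}

def find(t):
    parent.setdefault(t, t)
    while parent[t] != t:
        parent[t] = parent[parent[t]]   # path halving
        t = parent[t]
    return t

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

terms = ["a", "b", "f(a)", "f(b)", "f(f(a))", "f(f(b))"]
union("a", "b")                          # the given equation: a = b

# propagate congruence: whenever x ~ y, merge f(x) ~ f(y), to fixpoint
changed = True
while changed:
    changed = False
    for x in terms:
        for y in terms:
            fx, fy = f"f({x})", f"f({y})"
            if fx in terms and fy in terms:
                if find(x) == find(y) and find(fx) != find(fy):
                    union(fx, fy)
                    changed = True

print(find("f(f(a))") == find("f(f(b))"))   # True: derived by congruence
```

The fixpoint loop terminates because each merge strictly reduces the number of equivalence classes, which is the kind of local, polynomially bounded saturation the paper's tractability criterion is designed to recognize.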
Abstract:
This research is concerned with designing representations for analytical reasoning problems (of the sort found on the GRE and LSAT). These problems test the ability to draw logical conclusions. A computer program was developed that takes as input a straightforward predicate calculus translation of a problem, requests additional information if necessary, decides what to represent and how, designs representations capturing the constraints of the problem, and creates and executes a LISP program that uses those representations to produce a solution. Even though these problems are typically difficult for theorem provers to solve, the LISP program that uses the designed representations is very efficient.
Abstract:
One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. The thesis examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control structure issues and the problem of modelling change under incomplete knowledge are also considered. Extensive comparisons are also made with systems for mechanical theorem proving.
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are given, and design difficulties and alternatives are discussed.
The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
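The wires-and-devices model described above can be sketched directly: connectors hold values and notify attached devices, and each device deduces a value whenever enough of its wires are known. The bidirectional temperature relation F = 1.8·C + 32 is a classic illustration of using one relationship in more than one direction (it is not an example from the thesis itself).

```python
# Minimal constraint-propagation sketch: connectors + local devices.
class Connector:
    def __init__(self):
        self.value, self.devices = None, []

    def set(self, value):
        if self.value is None:          # first writer wins; then propagate
            self.value = value
            for d in self.devices:
                d.react()

class Adder:                            # enforces a + b = s, any direction
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s
        for c in (a, b, s):
            c.devices.append(self)

    def react(self):
        a, b, s = self.a.value, self.b.value, self.s.value
        if a is not None and b is not None:
            self.s.set(a + b)
        elif a is not None and s is not None:
            self.b.set(s - a)
        elif b is not None and s is not None:
            self.a.set(s - b)

class Scaler:                           # enforces y = k * x, both directions
    def __init__(self, k, x, y):
        self.k, self.x, self.y = k, x, y
        for c in (x, y):
            c.devices.append(self)

    def react(self):
        if self.x.value is not None:
            self.y.set(self.k * self.x.value)
        elif self.y.value is not None:
            self.x.set(self.y.value / self.k)

# wire up F = 1.8 * C + 32 from one Scaler and one Adder
C, F, mid, k32 = Connector(), Connector(), Connector(), Connector()
Scaler(1.8, C, mid)                     # mid = 1.8 * C
Adder(mid, k32, F)                      # F = mid + 32
k32.set(32.0)
F.set(212.0)                            # set the "output"; C is deduced
print(C.value)                          # 100.0 flows backwards through the net
```

Note that no device was told which wire is the input: setting F made the Adder deduce mid, which made the Scaler deduce C, exactly the locally driven, direction-free propagation the abstract describes.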
Abstract:
The influence of laser-field parameters, such as intensity and pulse width, on the population of a molecular excited state is investigated using the time-dependent wavepacket method. For a two-state system in intense laser fields, the populations in the upper and lower states are given by the wavefunctions obtained by solving the Schrodinger equation with a split-operator scheme. The calculation shows that both the laser intensity and the pulse width have a strong effect on the population of the molecular excited state, and that, as a common feature of light-matter interaction (LMI), the periodic variation of the population in each state with evolution time can be interpreted in terms of Rabi oscillation and the area theorem. The results illustrate that by controlling these two parameters, the desired population in the excited state of interest can be obtained, which provides a foundation for light manipulation of molecular processes.
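The Rabi-oscillation interpretation invoked above can be reproduced with a minimal two-level sketch: the on-resonance problem in the rotating-wave approximation, propagated with the exact per-step matrix exponential (a one-step analogue of the split-operator scheme). The Rabi frequency, time step, and units (hbar = 1) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Two-level Rabi problem: H = (Omega/2) sigma_x in the rotating frame,
# propagated step by step with U = exp(-i H dt).
omega = 2.0                            # Rabi frequency (set by field intensity)
dt, steps = 0.001, 3142                # integrate over roughly one Rabi period

theta = 0.5 * omega * dt
U = np.array([[np.cos(theta), -1j * np.sin(theta)],
              [-1j * np.sin(theta), np.cos(theta)]])

psi = np.array([1.0 + 0j, 0.0 + 0j])   # start in the lower state
pops = np.empty(steps)
for n in range(steps):
    psi = U @ psi
    pops[n] = abs(psi[1]) ** 2         # excited-state population

# the Rabi formula predicts P(t) = sin^2(Omega * t / 2)
t = dt * np.arange(1, steps + 1)
err = np.max(np.abs(pops - np.sin(0.5 * omega * t) ** 2))
print(err < 1e-9, pops.max() > 0.999)  # matches the analytic Rabi oscillation
```

The near-complete inversion around the half-period is the area theorem at work: the accumulated pulse area Omega·t reaches pi there, i.e. a pi-pulse, which is the lever the paper's intensity and pulse-width control exploits.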
Abstract:
The digital divide has been, at least until very recently, a major theme in policy as well as interdisciplinary academic circles across the world, as well as at a collective global level, as attested by the World Summit on the Information Society. Numerous research papers and volumes have attempted to conceptualise the digital divide and to offer reasoned prescriptive and normative responses. What has been lacking in many of these studies, it is submitted, is a rigorous negotiation of moral and political philosophy, the result being a failure to situate the digital divide - or rather, more widely, information imbalances - in a holistic understanding of social structures of power and wealth. In practice, prescriptive offerings have been little more than philanthropic in tendency, whether private or corporate philanthropy. Instead, a theory of distributive justice is required, one that recovers the tradition of emancipatory, democratic struggle. This much has been said before. What is new here, however, is that the paper suggests a specific formula, the Rawls-Tawney theorem, as a solution at the level of analytical moral-political philosophy. Building on the work of John Rawls and R. H. Tawney, this avoids both the Charybdis of Marxism and the Scylla of liberalism. It delineates some of the details of the meaning of social justice in the information age. Promulgating a conception of isonomia, which while egalitarian eschews arithmetic equality (the equality of misery), the paper hopes to contribute to the emerging ideal of communicative justice in the media-saturated, post-industrial epoch.
Abstract:
Ridoux, O. and Ferré, S. (2004) Introduction to logical information systems. Information Processing & Management, 40 (3), 383-419. Elsevier
Abstract:
Shen, Q., Zhao, R., Tang, W. (2008). Modelling random fuzzy renewal reward processes. IEEE Transactions on Fuzzy Systems, 16 (5),1379-1385
Abstract:
Douglas, Robert; Cullen, M.J.P.; Roulstone, I.; Sewell, M.J., (2005) 'Generalized semi-geostrophic theory on a sphere', Journal of Fluid Mechanics 531 pp.123-157 RAE2008
Abstract:
Cox, Simon; Weaire, D.; Fátima Vaz, M., (2002) 'The transition from two-dimensional to three-dimensional foam structures', The European Physical Journal E - Soft Matter 7(4) pp.311-315 RAE2008