926 results for Laplace inverse transform
Abstract:
In the prediction of complex reservoirs with strong lithologic and petrophysical heterogeneity, inexact data (overlapping, incomplete, or noise-contaminated information) and ambiguous physical relationships leave inversion results non-unique, unstable, and uncertain, so reservoir-prediction technologies based on linear assumptions are unsuited to such areas. Motivated by these limitations, the thesis investigates several core problems: inversion from band-limited seismic data, inversion resolution, inversion stability, and ambiguous physical relationships. It combines deterministic, statistical, and nonlinear geophysical theories, and integrates geological information, rock physics, well data, and seismic data to predict lithologic and petrophysical parameters; the resulting joint-inversion technology suits areas with complex depositional environments and complex rock-physics relationships. By combining a nonlinear multistage Robinson seismic convolution model with an unconventional Caianiello neural network, the thesis unifies deterministic and statistical inversion. Through the Robinson convolution model and a nonlinear self-affine transform, deterministic inversion establishes a deterministic relationship between seismic impedance and the seismic response, which ensures inversion reliability. Through multistage seismic wavelets (MSW) / multistage seismic inverse wavelets (MSIW) and the Caianiello neural network, statistical inversion establishes a statistical relationship between impedance and response, which provides robustness to noise. Direct and indirect inversion modes are used alternately to estimate and revise the impedance values.
The direct inversion result serves as the initial value for indirect inversion, and the final high-resolution impedance profile is obtained by indirect inversion, which greatly improves precision. A nonlinear rock-physics convolution model is then adopted to relate impedance to porosity and clay content; through multistage decomposition and bidirectional edge-wavelet detection it can capture more complex rock-physics relationships, and the Caianiello neural network again combines deterministic inversion, statistical inversion, and nonlinear theory. Finally, by jointly applying direct inversion based on vertical edge-detection wavelets and indirect inversion based on lateral edge-detection wavelets, the method integrates geological information, well data, and seismic impedance to estimate high-resolution petrophysical parameters (porosity and clay content) for reservoir prediction and characterization. Multi-well constraints and frequency-separated inversion modes are adopted. Analysis of the resulting lithologic and petrophysical sections shows that the low-frequency sections reflect the macro-structure of the strata, while the middle- and high-frequency sections reflect their detailed structure; the high-resolution sections can therefore be used to recognize sand-body boundaries and to predict hydrocarbon zones.
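The convolutional model underlying both inversion modes treats a seismic trace as a source wavelet convolved with the reflectivity series derived from impedance. A minimal sketch of that forward step (the two-layer impedance values and the Ricker wavelet parameters are invented for illustration):

```python
import numpy as np

def reflectivity_from_impedance(z):
    """Reflection coefficients r_i = (z_{i+1} - z_i) / (z_{i+1} + z_i)."""
    z = np.asarray(z, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(f, dt, n):
    """Ricker wavelet with peak frequency f (Hz) and sample interval dt (s)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Hypothetical two-layer impedance model (kg/m^2/s)
z = np.concatenate([np.full(50, 3.0e6), np.full(50, 4.5e6)])
r = reflectivity_from_impedance(z)          # single spike at the interface
trace = np.convolve(r, ricker(30.0, 0.002, 41), mode="same")
```

Inverting this relation, from `trace` back to `z`, is the ill-posed step that the thesis stabilizes with the deterministic/statistical combination.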
Abstract:
In petroleum geophysical prospecting, reservoir prediction means forecasting the physical properties of a reservoir from seismic and well-logging data; it is research that directly guides oilfield development. Singularities in seismic and logging data are caused by heterogeneity of the reservoir's physical properties, and using these singularity characteristics to study reservoir properties has recently become an important approach. Within it, achieving quantitative reservoir prediction by analyzing data singularities and sharpening the description of transitions is the main methodological difficulty. Based on the wavelet transform and fractal theory, this dissertation studies singularity criteria for seismic and logging data, analyzes the quantitative relation between data singularities and reservoir physical properties, and applies the results to practical reservoir prediction. The main achievements are: 1. Herrmann (1999) proposed a method that estimates singular points and their strength at a single scale; building on it, the dissertation develops a modified algorithm that also detects singularity polarity. 2. The dissertation introduces an onset function to generalize the traditional model of geologic-boundary variation, using singularity characteristics to represent the abruptness of lithologic velocity transitions. We show, within convolutional seismic-model theory, that singularity analysis recovers the generic singularity information carried from velocity or acoustic impedance into the seismogram. Theory and application indicate that singularity information computed from seismic data is a natural attribute for delineating stratigraphic boundaries, owing to its ability to detect detailed geologic features.
We demonstrated that singularity analysis is a powerful tool for delineating stratigraphic boundaries and for inverting acoustic impedance and velocity. 3. The geologic significance of singularity information in logging data is also presented. According to our analysis, the positions of singularities indicate sequence-stratigraphic boundaries; there is a subtle relationship between singularity strength and sedimentary environment; and singularity polarity can be used to recognize stratigraphic base-level cycles. On this basis, the dissertation proposes a new method for sedimentary-cycle analysis using multi-scale singularity information from logging data. The method provides a quantitative tool for identifying sequence-stratigraphic interfaces and achieved good results in practical application.
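A common way to quantify singularity strength in this wavelet setting is to track how the wavelet-transform modulus scales across scales and read the Hölder exponent off the log-log slope (a sharp step scales with exponent near 0, a ramp-like transition near 1). The sketch below is the classic multi-scale estimate, not the single-scale algorithm of Herrmann (1999) nor the dissertation's modified version:

```python
import numpy as np

def gauss_deriv_wavelet(s):
    """First derivative of a Gaussian at scale s, 1/s (L1) normalization."""
    n = int(10 * s) | 1                       # odd support, roughly +/- 5s
    t = np.arange(n) - n // 2
    return -(t / s) * np.exp(-(t / s) ** 2 / 2) / s

def holder_slope(f, scales):
    """Log-log slope of max wavelet modulus vs. scale over the central
    window; approximates the Holder exponent of a centred singularity."""
    n = len(f)
    logs, logw = [], []
    for s in scales:
        w = np.convolve(f, gauss_deriv_wavelet(s), mode="same")
        m = np.abs(w[n // 4: 3 * n // 4]).max()   # skip edge artefacts
        logs.append(np.log2(s))
        logw.append(np.log2(m))
    return np.polyfit(logs, logw, 1)[0]
```

With this convention a step in a synthetic log yields a slope near 0 and a ramp a slope near 1, which is the kind of contrast used to classify boundary abruptness.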
Abstract:
Space currents strongly affect the electromagnetic environment and are a highlight of space research. As a supplier of momentum and energy to geospace storms, they disturb the varying part of the geomagnetic field, distort the magnetospheric configuration, and control the coupling between magnetosphere and ionosphere. Motivated by both academic and practical objectives, we carry out geomagnetic inversion and theoretical studies of space currents using geomagnetic data from INTERMAGNET. First, we apply the method of Natural Orthogonal Components (NOC) to decompose the solar daily variation, especially the solar quiet variation. NOC is an eigenmode analysis; its chief advantage is that the basis functions (BFs) are not prescribed in advance but emerge from the data themselves, so the leading BFs usually correspond to processes that actually occurred and carry more physical meaning than traditional spectral analysis with fixed BFs such as Fourier trigonometric functions. The first two eigenmodes correspond to the dominant daily variations, and their amplitudes show both seasonal and day-to-day trends, which is useful for evaluating geomagnetic activity indices. Because strict orthogonality is too restrictive when most components are not orthogonal, we extend the orthogonal constraints to non-orthogonal ones in order to give a more suitable decomposition of the real processes.
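The basic NOC step described above is an empirical eigenmode decomposition of a day-by-time data matrix: eigenvectors of the data's covariance-like matrix serve as basis functions and projections onto them give per-day amplitudes. A minimal PCA-style sketch (the days-by-samples layout is an assumption about how the observatory data would be arranged):

```python
import numpy as np

def noc_modes(X, k):
    """Natural Orthogonal Components of X (days x samples-per-day):
    eigenvectors of X^T X / n_days give the basis functions (BFs),
    projections of each day onto them give the mode amplitudes."""
    C = X.T @ X / X.shape[0]              # covariance-like matrix
    vals, vecs = np.linalg.eigh(C)        # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    bfs = vecs[:, order[:k]]              # (samples x k) basis functions
    amps = X @ bfs                        # (days x k) amplitudes
    return bfs, amps, vals[order[:k]]
```

The seasonal and day-to-day behaviour mentioned above would be read off the columns of `amps`.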
We introduce a mapping matrix that transforms the real physical space into a new mathematical space; in that space the modified components associated with the physical processes do satisfy orthogonality, so the NOC decomposition can be applied there, and all components are then transformed back into the original physical space. This completes a non-orthogonal decomposition that is more general for real-world processes. Secondly, a geomagnetic inversion of the ring current's topology is conducted. Configurational changes of the ring current in the magnetosphere lead to different patterns of the disturbed ground field, so the global configuration of the ring current can be inferred from its geomagnetic perturbations. We took advantage of the worldwide network of geomagnetic observatories to investigate the disturbed field produced by the ring current, and found that the ring current is not always centered on the geomagnetic equator; during several intense magnetic storms it deviated significantly from it. The deviation, due to tilting and latitudinal shifting of the ring current with respect to the Earth's dipole, can be estimated from a global geomagnetic survey. These two configurational factors, which quantify the ring-current configuration, should help improve the Dst calibration and clarify how the ring current's configuration depends on the plasma-sheet location relative to the equator when the magnetotail field is warped. Thirdly, the energization and physical acceleration of the ring current during magnetic storms is addressed. When the IMF Bz component intensifies, the enhanced convection electric field drives plasma injection into the inner magnetosphere; during transport the particles undergo dynamic heating, becoming "hotter" the deeper the injection penetrates.
The energy gradient along the injection path is equivalent to a force that resists further earthward injection of the plasma, a diamagnetic effect by which the magnetosphere repels the externally injected plasma. The acceleration efficiency has a power-law form. We describe this dynamical process analytically by introducing a physical parameter, the energization index, which helps to explain how the particles are heated; finally, we give a scheme for obtaining the energization index from storm-time geomagnetic data. During intense magnetic storms, the lognormal-like decrease of the geomagnetic Dst index depends on the heating dynamics of the magnetosphere controlling the ring current: the descending pattern of the main phase is governed by the magnetospheric configuration, which the energization index describes, while the amplitude of Dst correlates with the convection electric field or the southward component of the solar wind. The Dst index is then predicted from upstream solar-wind parameters. As is well known, space weather poses many challenges to technical systems, and geomagnetic indices serve to evaluate space-weather activity; we review the most popular Dst prediction methods and reproduce published Dst forecasting models. A concise and convenient Key Points model of the polar region is also introduced for space-weather use. In summary, this work offers new quantitative and physical descriptions of the space currents, with special focus on the ring current. Throughout, the aim is a better understanding of the natural world, particularly the space environment around the Earth, through analytical deduction, algorithm design, physical analysis, and quantitative interpretation; applications of theoretical physics combined with data analysis help us understand the basic physical processes governing the universe.
Abstract:
Seismic techniques lead the way in discovering oil and gas traps and searching for reserves throughout exploration. They demand high-quality processed seismic data: not only exact spatial positioning but also true amplitude, AVO attributes, and velocity information. The acquisition footprint degrades high-precision imaging and the analysis of AVO attributes and velocity. "Acquisition footprint" is a relatively new term for a class of noise in 3-D exploration, and it is not easy to characterize. This thesis begins with seismic data forward-modeled from a simple acoustic model, processes it, and discusses the cause of the acquisition footprint, concluding that the recording geometry is the main cause through the uneven distribution of fold, offset, and azimuth among grid cells. It summarizes the footprint's characteristics and description methods and analyzes its influence on geological interpretation and on seismic-attribute and velocity analysis. Data reconstruction based on the Fourier transform is at present the main method for interpolating and extrapolating non-uniform data, but it is usually an ill-conditioned inverse problem. A Tikhonov regularization strategy, which builds a priori information about the class of admissible solutions into the search, reduces the computational difficulty caused by the poor conditioning of the discretized kernel and the scarcity of observations; the adopted scheme is statistical in nature, does not require manual selection of the regularization parameter, and hence yields appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed through prestack data reconstruction. The thesis also applies a migration-based weighting scheme to remove the acquisition footprint.
The fundamental principles and algorithms are surveyed: seismic traces are weighted according to the area each trace occupies at different source-receiver distances. Using a grid method instead of computing the areas of a Voronoi map reduces the cost of calculating the weights. Results on model data and field seismic data demonstrate that a weighting scheme based on the relative area associated with each input trace with respect to its neighbors minimizes the artifacts caused by irregular acquisition geometry.
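The regularized Fourier reconstruction step can be sketched as damped least squares: fit Fourier coefficients m to irregular samples d by minimizing ||Am - d||^2 + eps^2 ||m||^2, then evaluate the model on a regular grid. A toy 1-D version (signal, sampling, and damping value are all invented; real implementations select the regularization adaptively, as the thesis notes):

```python
import numpy as np

def tikhonov_reconstruct(t_obs, d, kmax, t_grid, eps=1e-3):
    """Reconstruct a periodic signal on t_grid from irregular samples
    (t_obs, d) via Tikhonov-damped least-squares Fourier fitting."""
    k = np.arange(-kmax, kmax + 1)
    A = np.exp(2j * np.pi * np.outer(t_obs, k))       # forward operator
    # damped normal equations: (A^H A + eps^2 I) m = A^H d
    m = np.linalg.solve(A.conj().T @ A + eps**2 * np.eye(len(k)),
                        A.conj().T @ d)
    return np.real(np.exp(2j * np.pi * np.outer(t_grid, k)) @ m)
```

In the prestack application, the same normal-equation structure appears per temporal frequency over the spatial coordinates, and the damping is what tames the ill-conditioning caused by gaps in the geometry.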
Abstract:
In practical seismic profiles, multiple reflections impede even the experienced interpreter in extracting information from reflection data. Surface multiples are usually much stronger, more broadband, and more of a problem than internal multiples, because the reflection coefficient at the water surface is much larger than those found in the subsurface. For this reason most attempts to remove multiples from marine data, including this one, focus on surface multiples. Surface-related multiple attenuation can be formulated as an iterative procedure. In this thesis a fully data-driven approach called MPI, multiple prediction through inversion (Wang, 2003), is applied to a real marine seismic data example. It is a promising scheme that predicts a relatively accurate multiple model by updating the model iteratively, as in a linearized inverse problem. The prominent characteristic of the MPI method is that it eliminates the need for an explicit surface operator: it can model the multiple wavefield without any knowledge of the surface or subsurface structure, or even of the source signature. Another key feature is that it predicts multiples not only in traveltime but also in phase and amplitude. The real-data experiments show that the scheme can be made very efficient if a good initial estimate of the multiple-free data is provided in the first iteration. For the other core step, multiple subtraction, we use an expanded multichannel matching (EMCM) filter. Whereas in a normal multichannel matching filter an original seismic trace is matched by a group of multiple-model traces, in the EMCM filter a trace is matched not only by the ordinary multiple-model traces but also by their mathematically generated adjoints.
The adjoints of a multiple-model trace are its first derivative, its Hilbert transform, and the derivative of the Hilbert transform. The third chapter applies these methods to real data, where their effectiveness and practical value are evident. For this case, three groups of experiments test the MPI method: assessing its overall effectiveness, comparing subtraction results with fixed filter length but different window lengths, and investigating the influence of the initial subtraction result. The real-data application confirms that the initial demultiple estimate strongly influences the MPI method, so two approaches, first-arrival picking and a masking filter, are introduced to refine it. The final part draws conclusions from these results.
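Generating the three adjoint traces named above is straightforward with standard signal-processing tools; a sketch (the multiple-model trace itself would come from the MPI prediction):

```python
import numpy as np
from scipy.signal import hilbert

def emcm_adjoints(trace):
    """Adjoint traces used alongside a multiple-model trace in the EMCM
    filter: first derivative, Hilbert transform, and the derivative of
    the Hilbert transform."""
    d1 = np.gradient(trace)            # first derivative
    h = np.imag(hilbert(trace))        # Hilbert transform (FFT-based)
    dh = np.gradient(h)                # derivative of the Hilbert transform
    return d1, h, dh
```

The matching filter then regresses the recorded trace on the stacked set {model traces, d1, h, dh}, which gives it freedom to correct small phase and timing errors in the predicted multiples.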
Abstract:
As we know, the essence of exploration is to determine a target body by acquiring information about it. Seismic, electrical, and electromagnetic prospecting are the common exploration methods and now rest on a complete body of theory. In fact, effective information can also be obtained through diffusion; we call this diffusion prospecting, and it is both necessary and important. This paper studies the diffusion-prospecting approach; the main contributions are: (1) From the basic laws of diffusion, the paper introduces the idea of a diffusion wave and derives formulas for computing the diffusion-wave function. (2) It studies the procedure of diffusion prospecting and the associated data-processing methods, and sets out the characteristics and application prospects of the approach. (3) It presents the tomography concept and the basic method of diffusion CT, and discusses the prospects of applying diffusion CT in oilfield-development prospecting. (4) Since inversion of the diffusion equation is part of diffusion prospecting, methods of diffusion-equation inversion are studied and two inversion formulas are derived: a Laplace-transform formula and a polynomial-fitting formula. As a further important result, the inversion offers a new analysis method for well testing in oil development. To show that the methods in the paper are feasible, forward modeling, inversion, and CT numerical simulations are carried out.
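The paper's own Laplace-transform inversion formula is not reproduced here, but diffusion-equation solutions are often known in the Laplace domain, and a standard numerical route back to the time domain is the Gaver-Stehfest algorithm. A sketch (the test transform is illustrative, not a diffusion solution from the paper):

```python
import numpy as np
from math import factorial

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1) ** (k + N // 2) * s
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    V = stehfest_coeffs(N)
    a = np.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

Gaver-Stehfest works well for the smooth, monotone responses typical of diffusion and well-test problems, which is presumably why Laplace-domain formulas are practical there.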
Abstract:
At present the main object of oil and gas exploration and development (E&D) is no longer structural pools but subtle lithologic reservoirs. Since the early 1990s the share of such pools in newly added oil reserves has grown steadily, including in the eastern oilfields. The third oil-gas resource evaluation indicates that the main future exploration objects in the Jiyang depression are lithologic pools. However, the lack of effective methods for finding this kind of pool makes E&D difficult and costly. In view of this urgent demand, this paper studies in depth the theory and application of seismic attributes for predicting and describing lithologic reservoirs. Substantial results are obtained by making full use of the abundant physical and reservoir information and the remarkable lateral continuity of seismic data, in combination with well logging, drilling, and geology. ① Based on extensive research and the varied geological features of the Shengli oilfield, progress is made on several theories and methods of seismic reservoir prediction and description. Three methods for extrapolating near-well seismic wavelets, inverse-distance interpolation, phase interpolation, and pseudo-well reflectivity, are improved; in particular, a wavelet-theory-based method for obtaining pseudo-well reflectivity in sparse-well areas is given. Formulae for seismic attributes and coherence volumes are derived theoretically, and an optimal selection method for seismic attributes and improved algorithms for extracting coherence volumes are put forward.
A method of sequence analysis on seismic data is put forward and derived in which the wavelet transform is used to analyze the seismic characteristics of reservoirs both qualitatively and quantitatively. ② Guided by geologic models and seismic forward simulation, from macro to micro, a method of joint pre- and post-stack data analysis is put forward that combines seismic data closely with geology; in particular, while post-stack data are fully exploited, pre-stack seismic data (the "green food" of seismic analysis) are utilized wherever possible. ③ The paper studies the formation law and distribution characteristics of Tertiary lithologic oil-gas pools in the Jiyang depression, the geological-geophysical knowledge and feasibility of the various seismic methods, and the applied use of seismic data together with the geophysical mechanisms of oil-gas reservoirs. On this basis a complete suite of seismic techniques and software suited to E&D of different categories of lithologic reservoirs is completed. ④ Unlike other new seismic methods proposed in recent years (multi-wave multi-component seismic, cross-hole seismic, vertical seismic profiling, time-lapse seismic, etc.), which require reacquisition of seismic data to predict and describe reservoirs, the method in this paper is based on conventional 2D/3D seismic data, so the cost falls sharply. ⑤ In recent years this technique of predicting and describing lithologic reservoirs from seismic information has been applied to E&D of glutenite fans on abrupt slopes, turbidite fans in front of abrupt slopes, slump turbidite fans in front of deltas, channeled turbidite fans on low slopes, and channel sand bodies, with encouraging geological results. These results indicate that the application of seismic information is one of the most effective ways to solve the present E&D problem.
This technique merits wider application and popularization: it contributes to increasing reserves, raising production, and sustaining stable development in the Shengli oilfield, and it will guide E&D of similar reservoirs elsewhere.
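Among the attributes mentioned above, coherence is the one most directly tied to a simple formula: semblance-style coherence measures how similar a group of neighboring traces is within a window. A minimal sketch (one window, one trace group; production coherence volumes scan this over all positions and times):

```python
import numpy as np

def semblance_coherence(traces):
    """Semblance coherence of neighboring traces in a window:
    energy of the stack divided by total energy, in [0, 1]."""
    traces = np.asarray(traces, dtype=float)
    stack = traces.sum(axis=0)
    num = (stack ** 2).sum()
    den = traces.shape[0] * (traces ** 2).sum()
    return num / den

# identical traces cohere perfectly; uncorrelated noise does not
t = np.linspace(0, 1, 200)
sig = np.sin(2 * np.pi * 8 * t)
c_same = semblance_coherence(np.vstack([sig, sig, sig]))
```

Low coherence between adjacent traces is what highlights faults and sand-body edges in the coherence volumes the paper extracts.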
Abstract:
With major developments in seismic-source theory, computing technology, and survey instruments, the rupture process of earthquakes can now be modeled and reconstructed more realistically, clarifying source properties and the laws of tectonic activity. The research in this thesis is as follows. Based on the generalized ray method, expressions are obtained for the displacement on the surface of a half-space due to an arbitrarily oriented shear and tensile dislocation; kinematically, fault-normal motion is equivalent to tensile faulting, and there is evidence that such motion occurs in many earthquakes. Expressions for the static surface displacement of a layered half-space due to a static point moment-tensor source are given in terms of the generalized reflection and transmission coefficient matrix method. The validity and precision of the new method are illustrated by the consistency of our results with the analytical solution of Okada's code for the same point source in a homogeneous half-space. The vertical ground displacement computed with the moment-tensor solution of the Lancang-Gengma earthquake differs considerably from that of its double-couple component alone, and the effect of a soft layer atop the homogeneous half-space on a shallow normal-faulting earthquake is also analyzed. Our results show that more seismic information can be obtained using a moment-tensor source together with a layered half-space model. The rupture process of the 1999 Chi-Chi, Taiwan, earthquake is investigated using co-seismic GPS surface displacements and far-field P-wave records. Guided by tectonic analysis and the aftershock distribution, we introduce three bending fault segments into our model, and use both elastic half-space and layered-earth models to invert for the distribution of co-seismic slip along the Chi-Chi rupture.
The results indicate that a pure shear-slip model cannot fit the horizontal and vertical co-seismic displacements together unless fault-normal (tensile) motion is added to the inversion. The Chi-Chi rupture process is then obtained by joint inversion of seismograms and GPS observations. The fault-normal motions determined by inversion concentrate on the shallow northern bending fault from Fengyuan to Shuangji, where the surface ruptures are more complex and flexural-slip folding is better developed than along the other portions of the rupture zone. To understand the perturbation of surface displacements caused by near-surface complex structures, we run a numerical test synthesizing and inverting the surface displacements of a pop-up structure composed of a main thrust and a back thrust. The result indicates that the pop-up structure, the typical shallow complexity of the northern bending fault zone from Fengyuan to Shuangji, is modeled better by a thrust fault with an added negative tensile component than by a simple thrust fault. We interpret the negative tensile distribution concentrated there as the combined effect of the complexities of rupture property and geometry; the rupture process likewise shows greater spatial and temporal complexity from Fengyuan to Shuangji. From three-component teleseismic records, the S-wave velocity structure beneath 59 teleseismic stations in Taiwan is obtained using the transfer-function method and simulated annealing (SA). The integrated result, a 3-D crustal structure of Taiwan, reveals that the thickest crust lies beneath the western Central Range, consistent with the Bouguer gravity anomaly. The orogenic evolution of Taiwan is young, and the developing root of the Central Range is not in static balance.
The crust of Taiwan remains in a course of dynamic equilibrium. The rupture process of the 24 February 2003 Jiashi, Xinjiang earthquake is estimated with a finite-fault model using far-field broadband P-wave records from CDSN and IRIS. The results indicate a north-dipping thrust fault with some left-lateral strike slip; the focal mechanism differs from those of the 1997 and 1998 earthquakes but is similar to that of the 1996 Artux, Xinjiang earthquake. We interpret the thrust faulting as caused by the northward push of the Tarim basin and the orogeny of the Tianshan mountains. Finally, a future research subject is outlined: building an Internet-based real-time distributed system for the rupture process of large earthquakes.
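Once the fault geometry is fixed, the slip-inversion step used throughout these studies is linear: observed displacements d relate to subfault slips m through Green's functions G, and a damped least-squares solve keeps the result stable. A schematic sketch (the Green's-function matrix and slip vector are synthetic, not from the Chi-Chi or Jiashi models):

```python
import numpy as np

def invert_slip(G, d, damp=0.1):
    """Damped least-squares slip inversion:
    minimize ||G m - d||^2 + damp^2 ||m||^2 over subfault slips m."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + damp**2 * np.eye(n), G.T @ d)
```

Adding the tensile (fault-normal) component discussed above amounts to appending extra columns to G, one set per subfault, for the tensile Green's functions.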
Abstract:
The theory and approach of broadband teleseismic body-waveform inversion are expounded in this paper, and methods for determining crustal structure are developed. Based on teleseismic P-wave data, the theoretical radial-component P waveform is calculated via convolution of the vertical component with a transfer function, and a P-waveform inversion method is built on this; applications show the approach to be effective and stable, with high resolution. Accurate, reliable teleseismic P waveforms recorded by CDSN and IRIS are used to obtain lithospheric transfer functions for China and its adjacent regions, and the regional lithospheric structure is derived by inverting these transfer functions. New knowledge of the deep structure of China and its surroundings is obtained, providing reliable seismological evidence for revealing geodynamic evolution processes and for building the theory of continental collision. The major studies are as follows. Two important methods for studying crustal and upper-mantle structure, body-wave travel-time inversion and waveform modeling, are reviewed systematically. Based on ray theory, travel-time inversion is simple: a preliminary 1-D velocity model of the crust and upper mantle can be obtained from it, providing the reference model for studies of event location, focal mechanisms, and fine structure, while 3-D travel-time tomography yields the large-scale lateral inhomogeneity of the crust and upper mantle. Based on elastic dynamics, waveform modeling fits theoretical seismograms to observed ones, interpreting the detailed waveform and uncovering 1-D fine structure and lateral variation of the crust and upper mantle, especially the properties of media in zones where ray theory breaks down.
Both travel-time inversion and waveform modeling rest on approximations, each with its own advantages and disadvantages, and both provide convincing structural information for elucidating the physical and chemical features and geodynamic processes of the crust and upper mantle. Direct, surface, and refracted waves have low resolution in velocity-transition zones and are inadequate for studying seismic discontinuities. By contrast, converted and reflected waves sample the discontinuities directly and must be carefully picked from seismograms to constrain the transition zones; they can be used to study not only crustal structure but also upper-mantle discontinuities. A number of global and regional seismic discontinuities exist in the crust and upper mantle, and they play a significant role in understanding its physical and chemical properties and geodynamic processes. Broadband teleseismic P-waveform inversion is studied in particular. Teleseismic P waveforms carry information about the source time function, near-source structure, propagation through the mantle, receiver structure, and instrument response. The receiver function is isolated from the teleseismic P waveform by rotating the horizontal components into the ray direction and deconvolving the vertical component from the radial and tangential components of ground motion; the resulting time series is dominated by local receiver-structure effects and is nearly independent of source and deep-mantle effects. The receiver function is the horizontal response: it suppresses multiple P reflections, retains the direct wave and P-to-S conversions, and is sensitive to the vertical variation of S-wave velocity.
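The deconvolution step just described can be sketched in the frequency domain with water-level stabilization, a common textbook implementation and not necessarily the exact one used in this work:

```python
import numpy as np

def receiver_function(radial, vertical, water=1e-3):
    """Deconvolve the vertical from the radial component in the
    frequency domain, stabilized with a water level on |Z|^2."""
    n = len(radial)
    R = np.fft.rfft(radial)
    Z = np.fft.rfft(vertical)
    power = (Z * np.conj(Z)).real
    power = np.maximum(power, water * power.max())  # water level
    return np.fft.irfft(R * np.conj(Z) / power, n)
```

Where the vertical spectrum is strong the quotient recovers the receiver response exactly; the water level only fills spectral holes that would otherwise blow up the division.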
The velocity structure beneath a seismic station responds differently to the radial and vertical components of an incident teleseismic P wave. To avoid the limits of a simplified assumption about the vertical response, the receiver-function method is amended: in the frequency domain, the transfer function is the ratio of the radial to the vertical response of the medium to the P wave; in the time domain, the radial synthetic waveform is obtained by convolving the transfer function with the vertical waveform. To ensure numerical stability, the generalized reflection and transmission coefficient matrix method is used to compute the synthetic waveform, so that all multiple reflections and phase conversions are included. A new inversion method, VFSA-LM, is used in this study; it combines very fast simulated annealing (VFSA) with damped least-squares (LM) inversion, and synthetic waveform tests confirm its effectiveness and efficiency. Broadband teleseismic P-waveform inversion is then applied to the lithospheric velocity structure of China and its adjacent regions. From high-quality CDSN and IRIS data we obtain an outline map of Asian continental crustal thickness; based on these results, the distribution of crustal thickness and the outline of crustal structure under the Asian continent are analyzed, and the principal characteristics of the Asian continental crust are advanced: there are four vast areas of relatively minor variation in crustal thickness, namely the northern, eastern, southern, and central areas of the Asian crust. As a byproduct, earthquake location, a basic issue in seismology, is discussed. Because of the strong trade-off between the assumed origin time and focal depth, and the nonlinearity of the inverse problem, this issue is not yet fully settled.
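The annealing ingredient shared by VFSA-LM and the SAMS location method can be illustrated generically: perturb the model, accept worse models with a Metropolis probability that shrinks as the temperature cools. This is plain simulated annealing on a toy misfit, not the thesis's VFSA-LM implementation (VFSA additionally uses a Cauchy-like neighborhood and a faster cooling schedule):

```python
import numpy as np

def anneal(misfit, x0, lo, hi, t0=1.0, decay=0.995, steps=4000, seed=0):
    """Generic simulated annealing: Gaussian perturbations scaled by the
    temperature, Metropolis acceptance, best-so-far tracking."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = misfit(x)
    best_x, best_e = x.copy(), e
    t = t0
    for _ in range(steps):
        cand = np.clip(x + t * rng.standard_normal(x.size), lo, hi)
        ec = misfit(cand)
        if ec < e or rng.random() < np.exp((e - ec) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= decay
    return best_x, best_e
```

In the earthquake-location setting the model vector would be (origin time, latitude, longitude, depth) and the misfit the travel-time residual norm.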
To address this problem, a new earthquake location method, the SAMS method, is presented, in which the objective function is the sum of the absolute values of the travel-time residuals and fast simulated annealing is used for the inversion. Applied to the relocation of the Chi-Chi, Taiwan earthquake of September 21, 1999, the results show that the SAMS method not only reduces the trade-off between origin time and focal depth but also achieves better stability and resolving power. At the end of the paper, an inverse Q filtering method for compensating attenuation and frequency dispersion in depth-domain seismic sections is discussed. Forward and inverse tests on synthetic seismic records show that our depth-domain Q filtering operator is consistent with wave propagation in absorbing media: it accounts not only for the absorption of the waves by the medium but also for the deformation law, namely the frequency dispersion of the body wave. Two post-stack profiles about 60 km long from a neritic area of China were processed; the results show that after forward Q filtering in the depth domain, the wavelet width in the middle and deep layers is compressed, the resolution and signal-to-noise ratio are enhanced, and the primary shape and energy distribution of the profile are retained.
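The idea of locating an event by simulated annealing over an L1 travel-time misfit can be sketched as follows. This is an illustrative toy, not the SAMS algorithm itself: a homogeneous velocity `v`, the proposal scales, and the geometric cooling schedule are all assumptions.

```python
import numpy as np

def locate_sa(stations, t_obs, v=6.0, n_iter=3000, seed=0):
    """Simulated-annealing epicenter/depth/origin-time search that
    minimizes the sum of absolute travel-time residuals (L1 norm)."""
    rng = np.random.default_rng(seed)

    def misfit(p):
        x, y, z, t0 = p
        d = np.sqrt((stations[:, 0] - x) ** 2 + (stations[:, 1] - y) ** 2 + z ** 2)
        return np.abs(t_obs - (t0 + d / v)).sum()   # L1 residual norm

    p = np.array([0.0, 0.0, 10.0, 0.0])             # crude starting model
    best, best_e = p.copy(), misfit(p)
    T = 1.0
    for _ in range(n_iter):
        q = p + rng.normal(scale=T * np.array([5.0, 5.0, 3.0, 0.5]))
        q[2] = abs(q[2])                             # keep depth positive
        de = misfit(q) - misfit(p)
        if de < 0 or rng.random() < np.exp(-de / max(T, 1e-9)):
            p = q
            if misfit(p) < best_e:
                best, best_e = p.copy(), misfit(p)
        T *= 0.999                                   # geometric cooling
    return best, best_e
```

The L1 norm makes the objective less sensitive to outlier picks than a least-squares misfit, which is one motivation for combining it with a stochastic search.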
Resumo:
The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, I present the fundamentals and algorithms of independent component analysis (ICA) and its original applications to the separation of natural earthquake signals and exploration seismic signals. For signal representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, and preconditioned conjugate gradient methods, with applications to seismic deconvolution, Radon transformation, and related problems. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. They share the same goal, namely to provide pre-processing and post-processing methods for seismic pre-stack depth migration. ICA can separate the original signals from their mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA. Comparing it with the KL transform, I propose the concept of the independent component transform (ICT). On the basis of the negentropy measure of independence, I implemented FastICA and improved it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduced ICA into seismic signal processing for the first time in the geophysical community and implemented the separation of noise from seismic signals.
Synthetic and real data examples show the applicability of ICA to seismic signal processing, and promising initial results are achieved. ICA is also applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, enabling a more reasonable interpretation of subsurface discontinuities. These results show the promise of ICA for geophysical signal processing. By virtue of the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospect of applying ICA to it through two possible solutions. The relationship among PCA, ICA, and the wavelet transform is established, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an oversampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue for the critical role of regularization. Reviewing seismic interpolation algorithms, I note that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate gradient (PCG) solver are studied and implemented. Choosing a constraint term with a Cauchy distribution, I programmed the PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transformation by PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of evenly sampled data and reconstruction of unevenly sampled data each work well separately, but they could not previously be combined.
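The FastICA step mentioned above can be sketched compactly. This is a minimal symmetric FastICA with a tanh nonlinearity, a textbook form under assumed defaults; the dissertation's covariance-matrix improvement is not reproduced here:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA sketch. X has shape (n_components, n_samples);
    returns the estimated independent components (same shape)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        WZ = W @ Z
        g, gp = np.tanh(WZ), 1.0 - np.tanh(WZ) ** 2
        # Fixed-point update: W <- E[g(WZ) Z^T] - diag(E[g'(WZ)]) W
        W = (g @ Z.T) / Z.shape[1] - np.diag(gp.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z
```

As in all ICA, the components are recovered only up to order and sign, so quality is usually judged by the best absolute correlation of each source with some estimated component.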
In this paper, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both unevenly sampled and aliased. I formulate band-limited data reconstruction as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by a preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that the seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient, and practical. The research is valuable for seismic data regularization and cross-well seismics. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data evenly sampled and consistent with the velocity dataset. The methods of this paper are used to interpolate and extrapolate the shot gathers instead of simply inserting zero traces. As a result, the migration aperture is enlarged and the migration result is improved. The results demonstrate the effectiveness and practicality of the approach.
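The basic structure of such a reconstruction, a sampling operator plus a frequency-domain weighted norm, solved by conjugate gradients on the normal equations, can be sketched in one dimension. This toy uses a fixed per-frequency penalty as a stand-in for the paper's adaptive DFT weights, which are not reproduced here:

```python
import numpy as np

def reconstruct(d, mask, weight, lam=0.1, n_iter=100):
    """Minimum-norm least-squares reconstruction of missing samples:
    solve (S^T S + lam * F^H W^2 F) m = S^T d by conjugate gradients.
    `mask` is 1 at observed samples; `weight` penalizes each rfft bin."""
    n = len(mask)

    def A(m):  # normal-equations operator, applied via FFTs
        return mask * m + lam * np.fft.irfft(weight ** 2 * np.fft.rfft(m), n)

    b = np.zeros(n)
    b[mask.astype(bool)] = d          # S^T d: scatter data onto the grid
    m = np.zeros(n)
    r = b - A(m)
    p = r.copy()
    rr = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rr / (p @ Ap)
        m += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return m
```

A weight that grows with frequency pushes the energy of the interpolated samples into the low frequencies, which is what suppresses the aliased replicas that periodic gaps would otherwise create.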
Resumo:
In this paper, we present a combined preconditioner derived from the k = ±(-1)^(1/2) circulant extensions of real symmetric positive-definite Toeplitz matrices, prove its efficiency and stability, and show that with the combined preconditioner it is easy to carry out error analysis and to remove the boundary effect. The paper also presents methods for the direct and inverse computation of real Toeplitz systems of equations and discusses the corresponding problems, in particular replacing the Toeplitz matrices with the combined preconditioners for analysis. Spectral analysis and the boundary effect are also discussed. Finally, as an application in geophysics, the paper discusses the square root of a real matrix arising from the Laplace algorithm.
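The appeal of circulant preconditioning is that both the Toeplitz matrix-vector product and the preconditioner solve reduce to FFTs. As a sketch of the general technique (using the classical Strang circulant preconditioner as a simpler stand-in; the paper's combined construction is not reproduced here):

```python
import numpy as np

def strang_eigs(t, n):
    """Eigenvalues of Strang's circulant preconditioner for the
    symmetric Toeplitz matrix with first column t."""
    c = np.zeros(n)
    m = n // 2
    c[:m + 1] = t[:m + 1]          # keep the central diagonals
    c[m + 1:] = t[1:n - m][::-1]   # wrap the rest to make it circulant
    return np.fft.fft(c).real      # symmetric c => real eigenvalues

def pcg_toeplitz(t, b, n_iter=50, tol=1e-10):
    """PCG for T x = b, T symmetric PD Toeplitz with first column t.
    T*v uses a 2n circulant embedding; M^{-1}*v uses the FFT."""
    n = len(b)
    col = np.concatenate([t, [0.0], t[1:][::-1]])   # 2n circulant embedding
    ce = np.fft.fft(col)
    Tv = lambda v: np.fft.ifft(ce * np.fft.fft(v, 2 * n)).real[:n]
    lam = strang_eigs(t, n)
    Minv = lambda v: np.fft.ifft(np.fft.fft(v) / lam).real
    x = np.zeros(n)
    r = b - Tv(x)
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = Tv(p)
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For well-conditioned Toeplitz systems the preconditioned spectrum clusters at 1, so the iteration count stays essentially independent of n; this clustering is exactly what the paper's error analysis quantifies for its own preconditioner.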
Resumo:
In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy, and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of computer vision. In this approach, prior knowledge about the solution is expressed as a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly at low signal-to-noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic image pairs; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm with no free parameters for the restoration of piecewise uniform images.
(5) We analyze the implementation of the algorithms we develop on non-conventional hardware, such as massively parallel digital machines and analog and hybrid networks.
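The Monte Carlo procedure in contribution (1) can be illustrated with the standard example of this family of estimators: Gibbs sampling under an Ising prior, with the marginal-posterior-mode (MPM) decision rule for a binary piecewise-constant image. This is a generic sketch with assumed values for the coupling `beta` and noise level `sigma`, not the thesis's specific models or criteria:

```python
import numpy as np

def mpm_restore(y, beta=1.0, sigma=0.5, n_sweeps=60, burn=20, seed=0):
    """Approximate the MPM estimate of a binary image from noisy data y
    by Gibbs sampling: average post-burn-in samples, then threshold the
    estimated posterior marginals at 1/2."""
    rng = np.random.default_rng(seed)
    h, w = y.shape
    x = (y > 0.5).astype(int)          # initialize by thresholding the data
    counts = np.zeros_like(y)
    for sweep in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                e = np.zeros(2)
                for lab in (0, 1):
                    # Likelihood: Gaussian noise around the label value.
                    e[lab] = (y[i, j] - lab) ** 2 / (2 * sigma ** 2)
                    # Ising prior: penalize disagreeing 4-neighbours.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e[lab] += beta * (x[ni, nj] != lab)
                p1 = 1.0 / (1.0 + np.exp(e[1] - e[0]))
                x[i, j] = int(rng.random() < p1)
        if sweep >= burn:
            counts += x                 # accumulate posterior marginals
    return (counts / (n_sweeps - burn) > 0.5).astype(int)
```

Minimizing the expected number of misclassified pixels leads to exactly this per-pixel thresholding of the posterior marginals, which is why MPM differs from the MAP estimate and is well suited to Monte Carlo approximation.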
Resumo:
Ellis, D. I., Broadhurst, D., Kell, D. B., Rowland, J. J., & Goodacre, R. (2002). Rapid and quantitative detection of the microbial spoilage of meat by Fourier transform infrared spectroscopy and machine learning. Applied and Environmental Microbiology, 68(6), 2822-2828. Sponsorship: BBSRC