60 results for inverse probability weighted


Relevance:

20.00%

Publisher:

Abstract:

Methacrylic acid based inverse opal hydrogels (MIOHs) have been prepared under controlled synthesis conditions, including cross-linker content, solvent content, and water content in solvent mixtures, to explore the effect of the synthesis conditions (especially solvent content and mixture composition) on response performance. Various response events (pH, solvent, ionic strength, and 1,4-phenylenediamine dihydrochloride (PDA) response) have been investigated. For the pH and solvent responses, the same behavior is observed: both increased solvent (ethanol only) content and increased water content in the solvent lead to a reduced response level of the MIOHs, compared with the effect of increased cross-linker content. However, two distinct kinds of response behavior to ionic strength are found by adjusting the synthesis conditions. The kinetics of the pH response show the characteristics of a diffusion-limited process, with an equilibrium response time of about 20 min that cannot be reduced by changing the synthesis conditions.

Relevance:

20.00%

Publisher:

Abstract:

A new method of measuring the mean size of solvent clusters in a swollen polymer membrane is presented in this paper. The method is based on a combination of inverse gas chromatography (IGC) and equilibrium swelling. The underlying mechanism is that the weight fraction activity coefficient of the solvent in the swollen polymer is influenced by the size of its clusters. The mean cluster size of solvent in the swollen polymer can be calculated as the quotient of the weight fraction activity coefficient of the clustering system divided by that of the non-clustering system. In this experiment, the weight fraction activity coefficient of the non-clustering system was measured with IGC. Methanol and ethanol in polyimide were tested with the new method at three temperatures: 20, 40, and 60 °C. The mean cluster size of methanol in polyimide was five, four, and three at these temperatures, respectively. Ethanol did not form clusters (its mean cluster size was one). In contrast to the inherently narrow temperature range of DSC, XRD, and FTIR methods, the temperature range of IGC combined with equilibrium swelling is broad. Compared with DSC, XRD, and FTIR, the new method can detect solvent clusters in solvent-polymer systems at higher temperatures.
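The quotient described above can be sketched in a few lines; the coefficient values below are illustrative placeholders, not measured data from the paper:

```python
# Mean solvent cluster size as the quotient of weight fraction activity
# coefficients (clustering system / non-clustering system), per the method
# described above. Numerical values are invented for illustration.

def mean_cluster_size(omega_clustering, omega_non_clustering):
    """Return the mean cluster size from the two activity coefficients."""
    return omega_clustering / omega_non_clustering

# Hypothetical coefficients for a methanol/polyimide system at one temperature:
size = mean_cluster_size(omega_clustering=25.0, omega_non_clustering=5.0)
print(size)  # 5.0 -> a mean cluster of five solvent molecules
```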

Relevance:

20.00%

Publisher:

Abstract:

The probability distribution of the four-phase structure invariants (4PSIs) involving four pairs of structure factors is derived by integrating direct methods with isomorphous replacement (IR). A simple expression of the reliability parameter for 16 types of invariant is given for the case of a native protein and a heavy-atom derivative. Test calculations on a protein and its heavy-atom derivative using experimental diffraction data show that the reliability of the 4PSI estimates is comparable with that of the three-phase structure invariants (3PSIs), and that a large-modulus invariants method can be used to improve the accuracy.

Relevance:

20.00%

Publisher:

Abstract:

The main aim of this paper is to investigate the effects of impulse and time delay on a class of parabolic equations. In view of the characteristics of the equation, a particular iteration scheme is adopted. The results show that, under certain conditions on the coefficients of the equation and the impulse, the solution oscillates in a particular manner called "asymptotic weighted-periodicity".

Relevance:

20.00%

Publisher:

Abstract:

Based on the second-order solutions obtained for three-dimensional weakly nonlinear random waves propagating over a steady uniform current in finite water depth, the joint statistical distribution of the velocity and acceleration of a fluid particle in the current direction is derived using the characteristic function expansion method. From this joint distribution and the Morison equation, the theoretical distributions of the drag forces, inertia forces, and total random forces caused by waves propagating over a steady uniform current are determined. The distribution of inertia forces is Gaussian, as in the linear wave model, whereas the distributions of drag forces and total random forces deviate slightly from those derived using the linear wave model. The distributions presented can be determined from the wave number spectrum of the ocean waves, the current speed, and the second-order wave-wave and wave-current interactions. As an illustrative example, for fully developed deep ocean waves, the parameters appearing in the distributions near the still water level are calculated for various wind and current speeds using the Donelan-Pierson-Banner spectrum, and the effects of the current and of the nonlinearity of the ocean waves on the distributions are studied. (c) 2006 Elsevier Ltd. All rights reserved.
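The Morison decomposition underlying the force distributions above can be sketched numerically; this is a minimal illustration of the standard Morison equation, with coefficient and kinematics values invented for the example (not taken from the paper):

```python
import math

# Morison equation: the in-line force per unit length on a slender cylinder
# is the sum of an inertia term (proportional to fluid acceleration) and a
# quadratic drag term (proportional to velocity times its magnitude).

def morison_force(u, a, rho=1025.0, D=1.0, Cm=2.0, Cd=1.0):
    """In-line force per unit length [N/m] for velocity u [m/s], acceleration a [m/s^2]."""
    area = math.pi * D ** 2 / 4.0            # displaced area per unit length
    inertia = rho * Cm * area * a            # inertia (mass) term
    drag = 0.5 * rho * Cd * D * u * abs(u)   # drag term, sign follows u
    return inertia + drag

# With a steady current uc superposed on the wave orbital velocity uw:
uw, uc, aw = 1.2, 0.5, 0.8
print(morison_force(uw + uc, aw))
```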

Relevance:

20.00%

Publisher:

Abstract:

The fuzzy C-means algorithm has been applied successfully in cluster analysis. This paper proposes a new method for removing noise from images using the fuzzy C-means algorithm. In general, a noise point in an image is a pixel whose gray value differs from the gray values of its surrounding pixels by more than some threshold. Based on this fact, the fuzzy C-means algorithm is first used to classify the pixels; noise points are then detected with a standard kernel function and removed. Because only the gray values at the noise points are modified, while the gray values of all other pixels are left unchanged, the algorithm preserves detail and edges well. The method processes 3×3 points at a time, whereas previous methods could process only one point at a time, so the proposed method improves computational speed. Results of applying the method to real images are presented and compared quantitatively with the gradient inverse weighted method.
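The noise-point criterion above (a gray value differing from its neighborhood by more than a threshold) can be sketched as follows. This is a simplified illustration of the detect-and-replace step only, not the paper's full FCM-based algorithm; the threshold and test image are invented:

```python
# Detect noise pixels as those whose gray value differs from the mean of
# their 3x3 neighborhood by more than a threshold, then replace only those
# pixels with the neighborhood mean, leaving all other pixels untouched.

def remove_impulse_noise(img, threshold=50):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy: non-noise pixels stay unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbors = [img[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0)]
            mean = sum(neighbors) / len(neighbors)
            if abs(img[i][j] - mean) > threshold:  # noise-point criterion
                out[i][j] = int(round(mean))
    return out

# A flat gray-100 image with one impulse-noise spike in the middle:
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255
clean = remove_impulse_noise(img)
print(clean[2][2])  # 100 -> the spike is replaced by the neighborhood mean
```

Replacing only the flagged pixels is what preserves edges and detail: a pixel on a legitimate edge is consistent with part of its neighborhood, so its deviation from the neighborhood mean stays below the threshold.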

Relevance:

20.00%

Publisher:

Abstract:

In China and worldwide, more than half of recently discovered basin reserves are lithologic hydrocarbon reservoir reserves, and the major target of further basin exploration is the subtle reservoir. The Liaodong Bay prospect, which includes the Liaoxi low uplift, Liaodong uplift, Liaoxi sag, and Liaozhong sag, is highly important in the Bohai Sea. After decades of exploration in Liaodong Bay, few unexplored large or medium-sized favorable structural traps remain, and most of the remaining structural targets are too fragmented to be attractive. Seeking new prospect areas and making a breakthrough there has therefore become the only way to relieve the severe exploration situation in Liaodong Bay.

Technique Route: Based on the petrophysical properties of the target area, seismic forward modeling of typical subtle trap models is carried out together with analysis of logging, seismic, and geologic data. According to the petrophysical characteristics, the forward modeling, and research on the seismic response of actual seismic data in the target area, optimized geophysical techniques are applied to subtle trap identification, and a geophysical identification technique system for subtle reservoirs is formed.

The Key Research:
① Petrophysical model. Petrophysical parameters are the basic inputs for seismic wave simulation, and the difference in seismic response between rocks bearing different fluids is required. Using crossplots of log data, the influence of petrophysical parameters such as porosity, shale index, fluid property, and saturation on the rock elastic properties of the target area is analyzed. Based on current research on the Biot-Gassmann and Kuster-Toksoz models, a petrophysical parameter calculation program that can be used for fluid substitution is established.
② S-wave estimation from conventional log data. Shear velocity is needed for forward modeling of AVO and other elastic wave fields, but most conventional log data lack shear-wave measurements. Based on the petrophysical model research, rock S-wave parameters can therefore be estimated from conventional log data with a probabilistic inverse method.
③ AVO forward modeling based on well data. For 6 wells in the JZ31-6 block and 9 wells in the LD22-1 block, AVO forward-modeled records are generated from log curves, and the AVO characteristics of the objective interval are classified using lithologic information.
④ 2D parameter model building and forward modeling of subtle hydrocarbon traps in the target area. According to the formation interpretation of the ESS03D seismic survey, 2D parameter model building and seismic wave field forward modeling are carried out for known and predicted subtle hydrocarbon traps using log curves.
⑤ Lithology and fluid identification of subtle traps in the target area. After studying the seismic response characteristics of lithology and fluid in the target area, optimized geophysical techniques are used for lithology identification and fluid prediction.
⑥ The geophysical identification technique system for subtle reservoirs.

The Innovative Points of this Paper:
① Based on laboratory measurements and petrophysical model theory, rock S-wave parameters can be estimated from conventional log data with a probabilistic inverse method, and a fluid substitution method based on Biot-Gassmann and Kuster-Toksoz theory is provided.
② A method and workflow for simulating the seismic wave field properties of subtle hydrocarbon traps are established, based on petrophysical model building and wave-equation forward modeling.
③ A description of subtle trap structural features is developed. According to the different reflections of frequency-domain wave field structural attributes, the fluid properties of subtle traps can be identified by wave field attenuation attributes and absorption analysis.
④ For the first time, subtle traps are identified by geophysical techniques and exploration drilling well locations are provided.
⑤ A technique system for geophysical identification of subtle reservoirs is formed, providing an available workflow and research ideas for other regions of interest.
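The Biot-Gassmann fluid substitution mentioned above can be sketched with the standard Gassmann relation; the moduli below are typical textbook sandstone values invented for illustration, not the dissertation's calculator program:

```python
# Gassmann fluid substitution: given the dry-rock bulk modulus, mineral
# modulus, fluid modulus, and porosity, compute the saturated bulk modulus:
# K_sat = K_dry + (1 - K_dry/K_min)**2 /
#         (phi/K_fl + (1 - phi)/K_min - K_dry/K_min**2)

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus [GPa] from Gassmann's equation."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative sandstone (GPa): dry rock 12, quartz mineral 37, brine 2.25,
# porosity 25%. Saturating with brine stiffens the rock (K_sat > K_dry).
k_sat = gassmann_ksat(k_dry=12.0, k_min=37.0, k_fl=2.25, phi=0.25)
print(round(k_sat, 2))  # 15.72
```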

Relevance:

20.00%

Publisher:

Abstract:

Seismic surveying is the leading geophysical method for discovering oil and gas traps and finding reserves throughout oil and gas exploration. It demands high-quality processed seismic data: not only exact spatial positioning, but also true amplitude, AVO attribute, and velocity information. The acquisition footprint degrades the precision and quality of imaging and of AVO attribute and velocity analysis. "Acquisition footprint" is a relatively new concept for describing seismic noise in 3-D exploration, and it is not easy to grasp. This paper begins with forward-modeled seismic data from a simple acoustic wave model, processes it, and discusses the cause of the acquisition footprint. It is argued that the recording geometry is the main cause, leading to asymmetric distributions of fold, offset, and azimuth across grid cells. The paper summarizes the characteristics of the footprint and methods for describing it, and analyzes its influence on geological interpretation of the data and on seismic attribute and velocity analysis. Data reconstruction based on the Fourier transform is currently the main method for interpolating and extrapolating non-uniform data, but it is usually an ill-conditioned inverse problem. A Tikhonov regularization strategy, which includes a priori information on the class of solutions searched, can reduce the computational difficulty caused by the poor conditioning of the discrete kernel and the scarcity of observations. The method is statistical in character and does not require manual selection of the regularization parameter, and hence yields appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed through prestack data reconstruction. This paper applies migration-based processing to remove the acquisition footprint.
The fundamental principles and algorithms are surveyed: seismic traces are weighted according to the area occupied by each trace at different source-receiver distances. Adopting a grid method instead of computing the areas of a Voronoi diagram reduces the difficulty of calculating the weights. Results on model data and actual seismic data demonstrate that incorporating a weighting scheme, based on the relative area associated with each input trace with respect to its neighbors, minimizes the artifacts caused by irregular acquisition geometry.
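The Tikhonov strategy described above amounts to adding a damping term to the least-squares normal equations. A minimal sketch on an invented, nearly ill-conditioned two-unknown problem (not the paper's reconstruction operator), solved by Cramer's rule for simplicity:

```python
# Tikhonov-regularized least squares for A x ~ b:
# minimize ||A x - b||^2 + lam * ||x||^2,
# solved via the damped normal equations (A^T A + lam I) x = A^T b.

def tikhonov_solve(A, b, lam):
    # Normal-equation matrix M = A^T A + lam I and right-hand side r = A^T b
    m00 = sum(row[0] * row[0] for row in A) + lam
    m01 = sum(row[0] * row[1] for row in A)
    m11 = sum(row[1] * row[1] for row in A) + lam
    r0 = sum(row[0] * bi for row, bi in zip(A, b))
    r1 = sum(row[1] * bi for row, bi in zip(A, b))
    det = m00 * m11 - m01 * m01      # Cramer's rule on the 2x2 system
    return [(r0 * m11 - r1 * m01) / det, (m00 * r1 - m01 * r0) / det]

# Nearly collinear columns make the unregularized problem ill-conditioned;
# the damping keeps the solution stable and close to [1, 1]:
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
b = [2.0, 2.0, 2.0]
x = tikhonov_solve(A, b, lam=0.1)
print(x)
```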

Relevance:

20.00%

Publisher:

Abstract:

Block theory is an effective method for stability analysis of fractured rigid rock masses. Many discontinuity planes are developed in the rock mass of the Jinping II hydropower station conveyor tunnel, so the stability of the tunnel depends on whether unstable blocks exist on the excavation surfaces. This paper analyzes the stability of the conveyor tunnel with the stereo-analytical method of block theory, on the basis of a detailed investigation of the rock mass data, and identifies the sliding types of the movable blocks formed by all rock discontinuity planes and each excavation surface of the tunnel. It is concluded that the dominant sliding type is single-plane sliding, with relatively few cases of double-plane sliding and vertical block falling. The statistical distribution of movable blocks in the tunnel also shows clearly that there are somewhat more unstable blocks in the left wall and in the left and right arches than in the right wall. A stochastic probability model is then introduced into block theory to study the sliding probability of key blocks, based on the detailed rock mass data and the development of the discontinuity planes in the Jinping II conveyor tunnel. The following conclusions are obtained: the instability probability of a key block is inversely related to trace length; the instability probability of 1-3 m primary joints is relatively high; key blocks containing joint set J2 are relatively stable; and reinforcement of the arch is crucial in the conveyor tunnel. These results are useful for effective reinforcement design and have significant engineering value.
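As a hedged illustration of coupling a stochastic model to a block stability check, the sketch below estimates the sliding probability of a single-plane sliding block by sampling an uncertain joint friction angle. The dip, friction statistics, and limit-equilibrium criterion (factor of safety tan φ / tan ψ < 1) are invented for the example and are not the paper's model:

```python
import math
import random

# Single-plane sliding: a block on a plane dipping at angle psi slides when
# the factor of safety FS = tan(phi) / tan(psi) drops below 1, where phi is
# the joint friction angle. Treat phi as uncertain (normally distributed)
# and estimate P(FS < 1) by Monte Carlo sampling.

def sliding_probability(psi_deg, phi_mean_deg, phi_std_deg, n=100_000, seed=1):
    rng = random.Random(seed)
    tan_psi = math.tan(math.radians(psi_deg))
    failures = 0
    for _ in range(n):
        phi = rng.gauss(phi_mean_deg, phi_std_deg)   # sampled friction angle
        if math.tan(math.radians(phi)) / tan_psi < 1.0:
            failures += 1
    return failures / n

# Plane dipping 35 degrees, friction angle 38 +/- 4 degrees:
p = sliding_probability(psi_deg=35.0, phi_mean_deg=38.0, phi_std_deg=4.0)
print(round(p, 3))  # estimated probability that the block slides
```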

Relevance:

20.00%

Publisher:

Abstract:

The seismic survey is the most effective geophysical method in oil and gas exploration and development. As a main means of processing and interpreting seismic data, impedance inversion holds a special position in seismic surveying, because impedance is the link connecting seismic data with well-logging and geological information and is essential for predicting reservoir properties and sand bodies. In practice, the result of traditional impedance inversion is often not ideal: the mathematical inverse problem of impedance is ill-posed, so the result is unstable and non-unique, and it is necessary to introduce regularization. Most regularizations presented in the existing literature are simple ones premised on the image (or model) being globally smooth. An actual geological model, however, consists not only of smooth regions but also of regions separated by distinct edges, and the edges are a very important attribute of the model. It is difficult to preserve these characteristics while avoiding edges being smoothed beyond recognition. In this paper we therefore propose an impedance inversion method controlled by hyperparameters with edge-preserving regularization, improving both convergence speed and the inverted result. To preserve edges, the potential function of the regularization should satisfy nine conditions, including basic assumptions, edge preservation, and convergence assumptions. Eventually a model with a clean background and sharp anomalies can be acquired. Several potential functions and their corresponding weight functions are presented in this paper; calculations on models show that the potential functions φ_L, φ_HL, and φ_GM meet the required inversion precision. For locally constant, planar, and quadric models, we present the neighborhood systems of the Markov random field corresponding to the regularization term.
We linearize the nonlinear regularization using half-quadratic regularization, which not only preserves edges but also simplifies the inversion and allows linear methods to be used. We introduce two regularization parameters (hyperparameters), λ² and δ, into the regularization term: λ² balances the influence of the data term against the prior term, and δ is a calibration parameter that adjusts the gradient value at discontinuities (formation interfaces). In the inversion procedure, it is important to select the initial values of the hyperparameters and to update them, since these choices influence convergence speed and the inverted result. We give rough initial values of the hyperparameters using a trend curve of φ-(λ², δ) and a method for calculating their upper limits, and we update the hyperparameters either by a fixed coefficient or by the Maximum Likelihood method, which can be carried out simultaneously with the inversion. The inversion itself uses the Fast Simulated Annealing (FSA) algorithm, which overcomes trapping in local extrema without depending on the initial value and yields a globally optimal result. We also expound in detail the convergence conditions of FSA, the Metropolis acceptance probability from Metropolis-Hastings, the thermal process based on Gibbs sampling, and other methods integrated with FSA; these help in understanding and improving FSA. Calculations on a theoretical model and application to field data prove that the impedance inversion method of this paper has high precision, practicability, and obvious effect.
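The simulated annealing step can be sketched generically; the toy objective, cooling schedule, and parameters below are invented for illustration and are not the dissertation's FSA implementation:

```python
import math
import random

# Generic simulated annealing: propose a random perturbation, always accept
# improvements, and accept uphill moves with the Metropolis probability
# exp(-delta / T); the temperature T is lowered each iteration.

def anneal(f, x0, t0=2.0, cooling=0.999, steps=20_000, seed=7):
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)           # random perturbation
        fc = f(cand)
        delta = fc - fx
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc                     # Metropolis acceptance
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                             # cooling schedule
    return best_x, best_f

# Multimodal test function whose global minimum lies near x = 1.9:
f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x) ** 2
x, fx = anneal(f, x0=-5.0)
print(round(x, 2), round(fx, 3))
```

The uphill-acceptance probability is what lets the search escape the local minima that trap a pure descent method, which is the property the abstract relies on when it says FSA does not depend on the initial value.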

Relevance:

20.00%

Publisher:

Abstract:

The dissertation addresses the problems of signal reconstruction and data restoration in seismic data processing, taking the representation of signals as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For signal representation on natural bases, it presents the fundamentals and algorithms of ICA and original applications to the separation of natural earthquake signals and of survey seismic signals. For signal representation on deterministic bases, it proposes least-squares inversion regularization methods, sparseness constraints, and preconditioned conjugate gradient (PCG) methods for seismic data reconstruction, with applications to seismic deconvolution, the Radon transform, and related problems. The core content is a de-aliasing algorithm for reconstructing unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they can be integrated into one framework, because both deal with the restoration of signals or information: the former reconstructs original signals from mixed signals, while the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre-processing and post-processing methods for seismic prestack depth migration.

ICA can separate original signals from mixtures, or extract the basic structure of the analyzed data. I survey the fundamentals, algorithms, and applications of ICA, and, comparing it with the KL transform, propose the concept of the independent components transformation (ICT). On the basis of the negentropy measure of independence, I implement FastICA and improve it using the covariance matrix. By analyzing the characteristics of seismic signals, I introduce ICA into seismic signal processing, a first in the geophysical community, and implement the separation of noise from seismic signals. Synthetic and real data examples show the usability of ICA for seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, yielding a more reasonable interpretation of subsurface discontinuities. The results show the prospects of applying ICA to geophysical signal processing. By virtue of the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship among PCA, ICA, and the wavelet transform is described, and it is proved that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples.

The key to prestack depth migration is the regularization of prestack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. I first review seismic imaging methods to argue the critical effect of regularization; a review of seismic interpolation algorithms then shows that de-aliased reconstruction of uneven data remains a challenge. The fundamentals of seismic reconstruction are discussed first; sparseness-constrained least-squares inversion and a preconditioned conjugate gradient solver are then studied and implemented. Choosing a constraint term with a Cauchy distribution, I program the PCG algorithm and implement sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. Existing methods handle de-aliased interpolation of evenly sampled data and reconstruction of uneven data well separately, but they could not be combined.
In this paper, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both uneven and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by the preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Based on the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation of even gaps between traces, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Both synthetic and real data examples show that the proposed method is valid, efficient, and applicable; the research is valuable for seismic data regularization and crosswell seismic work. To meet the data requirements of 3D shot-profile depth migration, schemes must be adopted to make the data even and consistent with the velocity dataset. The methods of this paper are used to interpolate and extrapolate shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result is improved. The results show the effectiveness and practicability of the approach.
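The conjugate gradient solver central to the reconstruction above can be sketched in its plain (unpreconditioned) form on a small symmetric positive definite system; the matrix and right-hand side are invented for illustration:

```python
# Conjugate gradient for A x = b with A symmetric positive definite.
# Each iteration needs only matrix-vector products, which is what makes CG
# (and its preconditioned variant) attractive for large inverse problems
# such as least-squares data reconstruction.

def matvec(A, v):
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = [0.0] * len(b)
    r = b[:]                          # residual r = b - A x (x = 0 initially)
    p = r[:]                          # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)       # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # approx [1/11, 7/11]
```

A preconditioner, as in the PCG of the text, would replace `r` by `M_inv(r)` in the direction updates to cluster the eigenvalues and speed convergence; it is omitted here to keep the sketch minimal.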

Relevance:

20.00%

Publisher:

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation. To acquire the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves have grown, the pay zones of interest have become thinner and thinner, especially in terrestrial deposit oilfields, so electrical logging tools, limited by the conflicting requirements of resolution and depth of investigation, cannot provide the true formation resistivity directly. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable owing to noisy data and insufficient measurement information. This dissertation addresses the problem from three aspects: data acquisition, data processing and inversion, and application of the results together with uncertainty evaluation of the non-unique solution. It also treats other problems of traditional inversion methods, such as slow convergence and dependence of the result on the initial value.

Firstly, I deal with the uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, the errors can be as large as 30 percent due to mudcake influence, even when rugose-borehole effects on the MSFL readings can be ignored. Furthermore, there are still arguments over whether the two logs can be used quantitatively together to determine formation resistivities, given their different measurement principles. Thus, a new type of laterolog tool is designed theoretically.
The new tool provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Secondly, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously, and the new laterolog tool has not yet been applied in practice, my work focuses on two-parameter inversion (invasion radius and resistivity of the virgin formation) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion method. In the new method, a parameter is revised depending not only on the damping factor itself but also on the difference between the measured data and the fitted data in different layers. At least two iterations fewer than the older method are needed, reducing the computational cost of the inversion. The damped least-squares inversion method is a realization of Tikhonov's trade-off between smoothness of the solution and stability of the inversion process. It is realized through linearization of the nonlinear inverse problem, which inevitably makes the solution dependent on the initial parameter values; hence debates on the efficiency of such methods have grown with the development of nonlinear processing methods. An artificial neural network method is therefore proposed in this dissertation. A database of tool responses to formation parameters is built by modeling the laterolog tool and is then used to train the neural networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation.
Results show that the neural network inversion method can replace the traditional inversion method for a single formation and can also be used to determine the initial values for the traditional method. No matter what method is developed, non-uniqueness and uncertainty of the solution are inevitable, so it is wise to evaluate them when applying inversion results. Bayes' theorem provides a way to do so; the approach is discussed illustratively for a single formation and achieves plausible results. Finally, the traditional least-squares inversion method is used to process raw logging data; the calculated oil saturation is 20 percent higher than that of the unprocessed data when compared with core analysis.
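The damped least-squares iteration at the heart of the two-parameter inversion can be sketched on a toy forward model; the model y = a·exp(-b·t), the damping value, and the data are invented for illustration and stand in for the laterolog tool response:

```python
import math

# Damped (Levenberg-Marquardt style) least-squares iteration for a toy
# two-parameter forward model y = a * exp(-b * t). Each update solves the
# damped normal equations (J^T J + lam I) dp = J^T r, analogous to the
# damped two-parameter laterolog inversion described above.

def forward(a, b, ts):
    return [a * math.exp(-b * t) for t in ts]

def damped_step(a, b, ts, ys, lam):
    # Jacobian columns: dy/da = exp(-b t), dy/db = -a t exp(-b t)
    r = [y - f for y, f in zip(ys, forward(a, b, ts))]
    j1 = [math.exp(-b * t) for t in ts]
    j2 = [-a * t * math.exp(-b * t) for t in ts]
    m00 = sum(v * v for v in j1) + lam
    m01 = sum(u * v for u, v in zip(j1, j2))
    m11 = sum(v * v for v in j2) + lam
    r0 = sum(u * ri for u, ri in zip(j1, r))
    r1 = sum(v * ri for v, ri in zip(j2, r))
    det = m00 * m11 - m01 * m01          # solve the 2x2 damped system
    da = (r0 * m11 - r1 * m01) / det
    db = (m00 * r1 - m01 * r0) / det
    return a + da, b + db

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = forward(2.0, 0.7, ts)               # noiseless synthetic "measurements"
a, b = 1.0, 0.3                           # deliberately poor starting model
for _ in range(30):
    a, b = damped_step(a, b, ts, ys, lam=0.01)
print(round(a, 3), round(b, 3))          # should approach a=2.0, b=0.7
```

Note that the damping changes only the step, not the fixed point: at convergence da = db = 0 implies J^T r = 0, the undamped least-squares condition.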