243 results for Picking


Relevance:

10.00%

Publisher:

Abstract:

After analyzing the importance of integrating EDI with CIMS systems, this paper proposes a message-oriented approach to organizing and implementing the integration process. It then analyzes the techniques involved in that process, including information extraction, flat-file mapping, and translation methods, and uses a table-based method to build the flat-file schema and the mapping reference file, thereby achieving an effective, organic integration of EDI and CIMS systems.
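As a rough illustration of the table-driven mapping idea described above, here is a minimal sketch; the segment names, element positions, field names, and widths are assumptions for illustration only, not taken from the paper:

```python
# Illustrative table-driven mapping from an EDI-style message to a fixed-width
# flat-file record; all keys, field names, and widths are hypothetical.
mapping_table = {
    # (EDI segment, element index) -> (flat-file field name, fixed width)
    ("BGM", 1): ("order_no", 10),
    ("DTM", 1): ("order_date", 8),
    ("LIN", 2): ("item_code", 12),
    ("QTY", 1): ("quantity", 6),
}

edi_message = {
    ("BGM", 1): "PO12345",
    ("DTM", 1): "20240101",
    ("LIN", 2): "BEARING-6204",
    ("QTY", 1): "250",
}

def to_flat_record(message, table):
    """Translate one EDI-style message into a fixed-width flat-file line."""
    fields = []
    for key, (name, width) in table.items():
        value = str(message.get(key, ""))
        fields.append(value.ljust(width)[:width])   # pad or truncate to width
    return "".join(fields)

print(repr(to_flat_record(edi_message, mapping_table)))
```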

Relevance:

10.00%

Publisher:

Abstract:

Geophysical inversion transforms observed data into corresponding geophysical models. The goal of seismic inversion is not only wave-velocity models but also the fine structure and dynamic processes of the Earth's interior, extending to further parameters such as density, anisotropy, and viscosity. Inversion theory is divided into linear and nonlinear branches. Over the past 40 years linear inversion theory has matured into a complete and systematic theory with extensive practical applications, whereas many urgent problems remain in nonlinear inversion theory and practice. Based on the wave equation, this dissertation is mainly concerned with the theoretical study of several nonlinear inversion methods: waveform inversion, traveltime inversion, and the joint inversion of the two.

The objective of gradient waveform inversion is to find a geologic model whose synthetic seismograms best fit the observed seismograms. Compared with other inverse methods, waveform inversion uses all characteristics of the waveform and has high resolution. It is, however, an interface-by-interface method, an artificial parameter limit must be supplied in each iteration, and the inversion tends to get stuck in local minima if the starting model is too far from the actual model. Building on the velocity scanning used in conventional seismic data processing, this dissertation develops a layer-by-layer waveform inversion method to address these weaknesses. In wave-equation traveltime inversion (WT), the wave equation is used to compute the traveltime and its derivative (the perturbation of traveltime with respect to velocity). Unlike conventional ray-based traveltime inversion, WT requires no ray tracing, no traveltime picking, and no high-frequency assumption, and it gives good results even when the starting model is far from the real model; compared with waveform inversion, however, its resolution is low. Waveform inversion and WT therefore have complementary advantages and similar algorithms, which makes their joint inversion a better inversion method. Another point this dissertation emphasizes is how to exploit these complementary advantages fully without increasing storage or computation.

Numerical tests demonstrate the feasibility of the inversion methods discussed. For gradient waveform inversion in particular, field data acquired by our group in Wali park and Shunyi district are inverted; processing these real data shows that waveform inversion faces several practical problems, chiefly matching the synthetic seismograms to the observed ones and suppressing noise. In conclusion, building on previous experience, this dissertation implements waveform inversion based on the acoustic and elastic wave equations, traveltime inversion based on the acoustic wave equation, and conventional combined waveform-traveltime inversion. Beyond the standard analysis of inversion theory, it offers two innovations: layer-by-layer inversion of seismic reflection data and a rapid method for joint acoustic wave-equation inversion.
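To make the gradient-inversion loop concrete, here is a minimal sketch in which a smooth toy function stands in for the wave-equation forward operator and the gradient is taken by finite differences; a real implementation would compute the gradient with the adjoint-state method, and the model, step size, and data here are illustrative only:

```python
import numpy as np

def forward(model):
    # Hypothetical stand-in for wave-equation forward modeling: maps a
    # velocity model to a vector of "synthetic seismogram" samples.
    return np.tanh(model[:, None] * np.linspace(0.1, 1.0, 50)).ravel()

def misfit(model, d_obs):
    r = forward(model) - d_obs
    return 0.5 * np.dot(r, r)

def gradient_waveform_inversion(m0, d_obs, step=0.05, n_iter=200, eps=1e-4):
    """Gradient loop: update the model along the negative gradient of the
    waveform misfit until synthetics fit the observations."""
    m = m0.copy()
    for _ in range(n_iter):
        # Finite-difference gradient; adjoint-state would replace this loop.
        g = np.zeros_like(m)
        f0 = misfit(m, d_obs)
        for i in range(m.size):
            mp = m.copy()
            mp[i] += eps
            g[i] = (misfit(mp, d_obs) - f0) / eps
        m -= step * g
    return m

true_model = np.array([1.5, 2.0, 2.5, 3.0])   # "true" layer velocities (km/s)
d_obs = forward(true_model)                    # pretend these are observed data
m_inv = gradient_waveform_inversion(np.full(4, 2.0), d_obs)
print(np.round(m_inv, 3))
```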

Relevance:

10.00%

Publisher:

Abstract:

Petroleum and natural gas are important strategic resources: a shortage of reserves hinders economic development and threatens national security, and the main oil fields of our country are now past the peak of their exploration and exploitation. It is therefore of great significance to open up new exploration areas and increase petroleum and natural-gas reserves. Magnetic exploration is a principal geophysical method, and in recent years improved observation instruments and processing methods have widened its scope. The magnetic bright-spot method is one application of magnetic exploration: vertical migration of hydrocarbons changes the physical and chemical environment above the hydrocarbon reservoir, and in this new environment trivalent iron is reduced to bivalent iron, producing a small-scale magnetic anomaly, the magnetic bright spot. The method explores for oil and gas fields through the relation between hydrocarbons and this magnetic anomaly. Working on an oil-field project, this thesis systematically studies how to extract and identify magnetic bright spots and then delineates favorable areas. To test the result, the author superposes the magnetic bright spots on seismic information, which confirms that the bright spots are reliable; the author also completes a software package for picking and identifying magnetic bright spots.

The magnetic basement is very important for studying the formation and evolution of a basin; in particular, it is a crucial parameter when exploring residual basins in research on pre-Cenozoic remnants. Building on previous work, this thesis puts forward a new method for inverting the interface of the magnetic layer: step-by-step separation of the magnetic field. The idea is to translate the effect of magnetic-layer relief into the effect of a laterally varying magnetization: the layer is treated as flat, its thickness is chosen as the unit thickness, and for convenience of calculation the layer is defined as a unit-thickness layer whose variable magnetization is defined as an equivalent magnetic density. The relation between the magnetic field and the layer relief is thereby translated into a relation between the magnetic field and the equivalent magnetic density, and the relief is obtained by calculating that density. Compared with the conventional Parker method, model experiments and a field example show that this method is effective. Its merit is that it avoids the flattened results that a single average depth produces in strongly undulating areas, so the result is closer to reality; and because the method inverts an equivalent magnetic density and then converts it to layer relief, it lays a foundation for inverting magnetization that varies both laterally and vertically.
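One hedged way to write down the equivalence described above, in our own notation rather than the thesis's: the anomaly ΔT(x) of an undulating basement of magnetization M0 is approximated by that of a flat layer of chosen unit thickness Δz whose magnetization varies laterally, so that once the equivalent magnetic density M_eq(x) has been inverted from the field, the relief Δh(x) follows from

```latex
\Delta T(x) \;\approx\; F\big[\,M_{\mathrm{eq}}(x)\,\Delta z\,\big],
\qquad
M_{\mathrm{eq}}(x)\,\Delta z \;=\; M_{0}\,\Delta h(x)
\quad\Longrightarrow\quad
\Delta h(x) \;=\; \frac{M_{\mathrm{eq}}(x)\,\Delta z}{M_{0}}
```

where F[·] denotes the forward operator of the flat unit-thickness layer and Δh(x) is the basement relief about the mean depth.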

Relevance:

10.00%

Publisher:

Abstract:

A seismic signal is a typical non-stationary signal whose frequency content changes continuously with time and is determined by the bandwidth of the seismic source and the absorption characteristics of the subsurface media. A central goal of seismic processing and interpretation is to track abrupt changes of the local frequency with time, since such changes indicate changes in the physical properties of the media. For instantaneous attributes derived in the time-frequency domain, the key task is to find an effective, non-negative, and fast time-frequency distribution, transform the seismic signal into that domain to obtain its instantaneous power spectral density, and then derive the instantaneous attributes by weighted summation and averaging. Time-frequency analysis, a powerful tool for time-varying non-stationary signals, is an active research area of modern signal processing and an important means of seismic attribute analysis: it provides a joint time-frequency distribution that clearly shows how the signal's frequency content evolves with time. Spectral decomposition pushes the resolution of seismic data toward its theoretical limit, and by scanning and imaging three-dimensional seismic data over all frequencies it improves the ability of seismic data to resolve geological anomalies.

Matching pursuit is an important way to realize adaptive signal decomposition. Its central idea is that any signal can be expressed as a linear combination of time-frequency atoms: by decomposing the signal over an overcomplete dictionary, the atoms that best represent the signal are selected flexibly and adaptively according to the signal's characteristics. The method has excellent sparse-decomposition properties, is widely used in denoising, coding, and pattern recognition, and is also well suited to seismic signal decomposition and attribute analysis. This dissertation takes matching pursuit as its key research object. After systematically introducing the principle and implementation of matching pursuit, it studies in depth the pivotal problems of atom-type selection, discretization of the atom dictionary, and the search algorithm for the best-matching atom, and it applies matching pursuit to seismic processing by extracting instantaneous information from the time-frequency analysis and spectral decomposition of seismic signals.

Based on this research into adaptive decomposition with matching pursuit and the associated model tests, the dissertation proposes a fast search algorithm for the optimal time-frequency atom, a frequency-dominated pursuit aimed at seismic signal decomposition, which tailors the MP method to seismic processing. Building on the fast search for the optimal Gabor atom, it proposes global search strategies using simulated annealing, a genetic algorithm, and their combination, providing another route to a fast matching pursuit. At the same time, exploiting the characteristics of seismic signals, it proposes a fast atom-search algorithm that designates the maximum-energy points of the complex seismic trace and searches for the optimal atom in the neighborhood of these points according to instantaneous frequency and phase, which improves the computational efficiency of matching pursuit for seismic data. The proposed methods are implemented in code, compared with published algorithms, and validated on both synthetic and real signals. Problems that remain to be solved and directions for future research include seeking still more efficient fast matching pursuit algorithms, extending their range of application, and studying the practical use of the matching pursuit method.
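The following is a minimal sketch of plain greedy matching pursuit over a coarsely discretized Gabor dictionary; it does not reproduce the dissertation's frequency-dominated pursuit or its simulated-annealing and genetic searches, and the dictionary parameters and test trace are made up:

```python
import numpy as np

def gabor_atom(n, center, freq, width):
    """Real Gabor atom: Gaussian-windowed cosine, normalized to unit energy."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * (t - center))
    return g / (np.linalg.norm(g) + 1e-12)

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each step pick the atom with the largest
    inner product with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        corr = dictionary @ residual              # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))
        decomposition.append((k, corr[k]))
        residual -= corr[k] * dictionary[k]       # remove the chosen component
    return decomposition, residual

# Small, coarsely discretized Gabor dictionary (parameters are illustrative).
n = 256
params = [(c, f, w) for c in range(0, n, 16)
                    for f in (0.02, 0.05, 0.1)
                    for w in (8, 16)]
D = np.array([gabor_atom(n, c, f, w) for c, f, w in params])

# A toy "trace": two Gabor wavelets plus noise.
sig = 2.0 * gabor_atom(n, 64, 0.05, 8) - 1.5 * gabor_atom(n, 160, 0.1, 16)
sig += 0.05 * np.random.default_rng(0).standard_normal(n)

atoms, res = matching_pursuit(sig, D, n_atoms=5)
print("residual energy fraction:", np.linalg.norm(res) / np.linalg.norm(sig))
```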

Relevance:

10.00%

Publisher:

Abstract:

This dissertation starts from the observation that prestack time migration can be regarded as an approximation of prestack depth migration and develops a wave-equation-based prestack time migration approach. The approach includes: obtaining the traveltime and amplitude analytically from the one-way wave equation and stationary-phase theory; imaging with a 'spreading' scheme that follows the procedure of prestack depth migration; and updating the velocity model according to the flatness of events in CRP gathers. On this basis, we present a scheme that images land seismic data without field static corrections. The correct near-surface and stacking velocities are determined by picking the residual moveout of events in the CRP gathers, a reasonable migrated section is obtained from the updated velocities, and the section is corrected from a floating datum plane to a universal datum plane. The migration aperture is determined adaptively from the dips of the imaged structures, which not only speeds up the processing but also suppresses the migration noise produced by an excessive aperture. We adopt the deconvolution imaging condition of wave-equation migration, which partially compensates for geometric divergence, and we use a table-driven technique to enhance computational efficiency. When the subsurface is very complicated, it may be impossible to distinguish the DTS curve; to solve this problem we propose a technique to determine an appropriate range for the DTS curve, synthesize DTS panels in this range for different velocities and depths, stack the amplitude around zero time, and determine the correct velocity and location of the considered grid point by comparing the stacked values.

Relevance:

10.00%

Publisher:

Abstract:

Our study deals with high-resolution body-wave tomography in North China and adjacent areas (30°N-43°N, 100°E-130°E), a region with a very complicated geological structure where earthquakes have occurred many times in history. We use 6,870 events recorded during 1996-2002 at 273 digital seismic stations of the CDSN and at stations deployed by the Seislab of IGCAS in the Bohai Bay area, comprising 1,382 local and 5,488 teleseismic earthquakes. In these data the average number of recording stations per event is greater than 5, and the error of the picked direct arrival times is 0.1-0.5 s. Before the inversion we use the checkerboard test to confirm the reliability of the local-event results and a restoring-resolution test to confirm the reliability of the teleseismic results, and we analyze the effect of different inversion parameters. Based on this analysis, the model is divided into blocks of 0.33° in latitude and longitude and 5 km, 15 km, and 30 km in depth, together with an initial velocity model. Using the pseudo-bending method to trace ray paths and the LSQR algorithm for the inversion, we obtain velocity images from 25 km down to 480 km depth in this area from a joint inversion of local and teleseismic events. We conclude: (1) beneath the southern Sichuan Basin there are low-velocity anomalies at shallow depth, and below 40 km a high-velocity zone extends to 300 km; (2) above 40 km the Ordos block shows a low-velocity zone, while from 40 km down to 240 km high-velocity anomalies are interlaced with low-velocity anomalies, and below 300 km the anomalies are no longer clear; (3) overall, the velocity structure below 400 km in the mantle transition zone of eastern China changes from low to high velocity.
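A minimal sketch of the block-parameterized, LSQR-based inversion step assumed above; the ray-path matrix here is a random sparse stand-in (a real one would come from ray tracing, for example with the pseudo-bending method), and all sizes and noise levels are illustrative:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# Linearized travel-time tomography: d = G m, where d are travel-time
# residuals, m slowness perturbations in the blocks, and G the matrix of
# ray-path lengths through each block (random sparse stand-in here).
rng = np.random.default_rng(0)
n_rays, n_blocks = 500, 200
G = sprandom(n_rays, n_blocks, density=0.05, random_state=0, format="csr")

m_true = rng.standard_normal(n_blocks) * 0.01          # slowness perturbations
d = G @ m_true + 0.001 * rng.standard_normal(n_rays)   # residuals with pick noise

# LSQR solves the damped least-squares system without forming G^T G.
m_est, istop, itn = lsqr(G, d, damp=0.01, iter_lim=200)[:3]
print("iterations:", itn,
      "relative fit:", np.linalg.norm(G @ m_est - d) / np.linalg.norm(d))
```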

Relevance:

10.00%

Publisher:

Abstract:

At present the main objects of oil and gas exploration and development (E&D) are no longer structural pools but subtle lithological reservoirs. Since the early 1990s this kind of pool has accounted for an ever larger share of newly added oil reserves, including in the eastern oilfields, and the third oil-gas resource evaluation indicates that lithological pools will be the main future exploration target of the Jiyang depression. However, the lack of effective methods for finding this kind of pool makes E&D difficult and costly. In view of this urgent demand, this paper studies and analyzes in depth the theory and application of seismic attributes for predicting and describing lithological oil-gas reservoirs. Good results are obtained by making full use of the abundant physical and reservoir information and the remarkable lateral continuity contained in seismic data, in combination with well logging, drilling, and geology.

① Based on extensive research and the different geological features of the Shengli oilfield, progress is made on several theories and methods of seismic reservoir prediction and description. Three methods of extrapolating the seismic wavelet near wells, namely inverse-distance interpolation, phase interpolation, and pseudo-well reflectivity, are improved; in particular, a method based on wavelet theory is given for obtaining pseudo-well reflectivity in sparsely drilled areas. Formulae for seismic attributes and coherence volumes are derived, and an optimal selection method for seismic attributes and improved algorithms for extracting coherence volumes are put forward. A method of sequence analysis on seismic data is proposed and derived in which the wavelet transform is used to analyze the seismic characteristics of reservoirs both qualitatively and quantitatively. ② Based on geological models and seismic forward modeling, a method of joint pre- and post-stack analysis and application, from macro to micro, is put forward that combines seismic data closely with geology; in particular, while making full use of post-stack data, the 'green food' of pre-stack seismic data is used as far as possible. ③ The paper studies the formation and distribution of the Tertiary lithological oil-gas pools of the Jiyang depression, the relevant geological and geophysical knowledge, the feasibility of the various seismic methods, and the geophysical mechanisms of oil-gas reservoirs; on this basis, a set of seismic techniques and software suited to E&D of different categories of lithological reservoirs is completed. ④ This achievement differs from other new seismic methods proposed in recent years, such as multi-wave multi-component, cross-hole, vertical, and time-lapse seismic surveys, which require new acquisition of seismic data to predict and describe reservoirs; the method in this paper is based on conventional 2D/3D seismic data, so the cost falls sharply.

⑤ In recent years this technique for predicting and describing lithological reservoirs from seismic information has been applied to E&D of glutenite fans on steep slopes, turbidite fans in front of steep slopes, slump turbidite fans in front of deltas, channelized turbidite fans on gentle slopes, and channel sand bodies, with encouraging geological results. These results indicate that the use of seismic information is one of the most effective ways of solving the present E&D problem. The technique is well worth applying and popularizing, contributes to increasing reserves, raising production, and the stable development of the Shengli oilfield, and will also guide E&D of similar reservoirs elsewhere.

Relevance:

10.00%

Publisher:

Abstract:

Seismic exploration is the main tool of petroleum exploration. As society needs more petroleum and the level of exploration rises, exploration in areas of complex geological structure has become the main task of the oil industry, and seismic prestack depth migration has emerged because of its good ability to image complex structures. Since its result depends strongly on the velocity model, velocity model building for prestack depth migration has become a major research area. This thesis systematically analyzes the differences in prestack depth migration between our country and other countries and develops a tomographic method that requires no layered velocity model, a residual-curvature velocity analysis method based on a velocity model, and a method for removing pre-processing errors.

The tomographic velocity analysis method is analyzed first. It is characterized by theoretical completeness but difficulty in application: it uses picked first arrivals, compares them with the arrivals computed in a theoretical velocity model, and back-projects the differences along the ray paths to obtain a new velocity model. Its only assumption is the high-frequency approximation, so it is effective and efficient, but it still has a weakness: picking first arrivals in prestack data is difficult because the signal-to-noise ratio is low and many events cross each other, and these phenomena are strongest in areas of complex geology. A new tomographic velocity analysis method without a layered velocity model is therefore developed to address the picking problem: event times need not be picked continuously, but can be picked selectively according to their reliability. Besides the picked times required by routine tomography, the method also uses the slope of the events, and a high-resolution slope analysis method is used to improve the precision of picking. In addition, research on residual-curvature velocity analysis shows that its application is unsatisfactory and its efficiency low, because its assumptions are rigid and it is a local optimization method that cannot handle areas with strong lateral velocity variation; a new method is developed to improve the precision of velocity model building.

So far the workflow of prestack depth migration here is the same as abroad: before velocity model building, the original seismic data must be corrected to a datum plane, after which prestack depth migration is carried out. The well-known success is in the Gulf of Mexico, which is characterized by a simple near-surface structure, so the pre-processing is very simple and its precision very high. In our country, however, most seismic work is on land, where the near surface is very complex and in some areas the pre-processing error is large enough to degrade velocity model building; a new method is therefore developed to remove the pre-processing error and improve the precision of velocity model building. The main contributions are: (1) an effective tomographic velocity-building method that needs no layered velocity model; (2) a new high-resolution slope analysis method; (3) a globally optimized residual-curvature velocity-building method based on a velocity model; (4) an effective method of removing pre-processing errors.

All of the methods listed above have been verified by theoretical calculation and by actual seismic data.

Relevance:

10.00%

Publisher:

Abstract:

Butovskaya, a scholar of the former Soviet Union, first determined the depth of the basalt layer in the Tashkent zone in 1952 by using converted waves on seismograms. Since then, more and more scholars have carried out comprehensive research that images the Earth's interior structure with converted-wave information. With the digitalization of earthquake observation, inversion imaging of complete or partial waveform records can efficiently improve inversion quality and widen its scope of use, so converted-wave imaging has made great progress; this paper makes a contribution to converted-wave imaging on that basis. Transmitted PP waves and converted PS waves are generated when a P wave propagates through an interface separating two media with a large impedance contrast. A PS converted wave is a seismic body wave that results from the conversion of an incident parent P wave into a refracted S wave at a boundary within the crust. The thickness of a single crustal layer can in principle be determined by observing, with a three-component seismometer at a single station, the difference between the arrival time of the parent P wave and that of the PS converted wave. For a multilayered medium, PS converted-wave arrivals corresponding to each of the layers can in principle be observed, provided the station is sufficiently far from the source of the parent P wave to allow the P wave to penetrate beneath the deepest layer considered. To avoid the difficulty of picking the transmitted P-wave and converted-wave phases, this paper proposes a converted-wave migration method based on estimating the traveltime difference between the PS converted wave and the PP transmitted wave. To verify its validity, we apply the PS converted-wave migration algorithm to synthetic data generated by three forward models. The migration results indicate that PS converted waves can be migrated to reconstruct the transmitting interface, and the technique is helpful for investigating deep Earth structure with earthquake data.
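For the single-layer case described above, the delay of the PS conversion behind the direct P arrival constrains the layer thickness. In the usual flat-layer approximation, with ray parameter p and layer velocities V_P and V_S (our notation, not the paper's),

```latex
t_{PS} - t_{P} \;\approx\; h\left(\sqrt{\tfrac{1}{V_S^{2}} - p^{2}} \;-\; \sqrt{\tfrac{1}{V_P^{2}} - p^{2}}\right)
\;\xrightarrow{\;p \to 0\;}\;
h\left(\frac{1}{V_S} - \frac{1}{V_P}\right),
\qquad
h \;\approx\; \frac{t_{PS} - t_{P}}{1/V_S - 1/V_P}
```

so for near-vertical incidence the thickness follows directly from the differential time once V_P and V_S are assumed.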

Relevance:

10.00%

Publisher:

Abstract:

Reflectivity-sequence extraction is a key part of impedance inversion in seismic exploration. Although many valid inversion methods exist, the frequency band of crosswell seismic data still cannot be broadened enough to satisfy practical needs, and this remains an urgent problem. Prestack depth migration, developed over recent years, has become increasingly robust in exploration: it is a powerful imaging technology for geological targets with complex structure, and its final result is a reflectivity image. Based on reflectivity imaging of crosswell seismic data and the wave equation, this paper completes the following work.

First, a workflow for blind deconvolution is completed in which the Cauchy criterion regularizes the sparse inversion, and a preconditioned conjugate gradient (PCG) method based on Krylov subspaces reduces the computation, improves the speed, and removes the requirement that the transition matrix be symmetric and positive definite. The method is applied to high-frequency recovery of crosswell seismic sections with satisfactory results.

Second, a rotation transform and the Viterbi algorithm are applied in the preprocessing for wave-equation prestack depth migration. Wave-equation prestack depth migration requires the seismic dataset to lie on a regular grid, but complex terrain and folds can make the acquisition geometry irregular; at the same time, traces must be interpolated to avoid the aliasing produced by sparse sampling along the line. The rotation transform is used to make the lines parallel to the coordinate axes, and the Viterbi algorithm is used to pick events automatically, again with satisfactory results.

Third, imaging is, besides extrapolation, a key part of prestack depth migration: however accurate the extrapolation operator, the imaging condition strongly influences the final reflectivity image. The author migrates the Marmousi model under different imaging conditions and analyzes the results, which show that the imaging condition that stabilizes the source wavefield and the least-squares-estimation imaging condition proposed in this paper are better than the conventional correlation imaging condition.

Finally, the traditional pattern of 'distributed computing and mass decision' widely adopted in seismic data processing is becoming an obstacle to raising the level of enterprise management. A systematic solution employing the mode of 'distributed computing, centralized storage, instant release' is therefore put forward, based on a combination of C/S and B/S release models. The architecture of the solution, the corresponding web technology, and the client software are introduced, and application shows the validity of this scheme.
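A minimal sketch of Cauchy-regularized sparse-spike deconvolution in the spirit of the workflow above, simplified in two ways: the wavelet is assumed known rather than estimated blindly, and a plain conjugate-gradient solver replaces the Krylov-subspace PCG; the wavelet, spike positions, and tuning parameters are all made up:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
n = 200
t = np.arange(-20, 21)
wavelet = np.exp(-0.5 * (t / 4.0) ** 2) * np.cos(0.5 * t)   # band-limited wavelet

# Convolution matrix W (same-length output), so that the trace is d = W r.
col = np.r_[wavelet[20:], np.zeros(n - 21)]
row = np.r_[wavelet[20::-1], np.zeros(n - 21)]
W = toeplitz(col, row)

r_true = np.zeros(n)
r_true[[40, 90, 150]] = [1.0, -0.7, 0.5]                    # sparse reflectivity
d = W @ r_true + 0.02 * rng.standard_normal(n)              # noisy synthetic trace

# Iteratively reweighted least squares for the Cauchy-regularized objective
#   J(r) = ||W r - d||^2 + lam * sum(log(1 + r_i^2 / sigma^2));
# each inner (symmetric positive definite) system is solved with CG.
lam, sigma = 0.1, 0.05
r = np.zeros(n)
for _ in range(10):
    q = 1.0 / (1.0 + (r / sigma) ** 2)       # Cauchy reweighting
    A = W.T @ W + lam * np.diag(q)
    r, _ = cg(A, W.T @ d, x0=r, maxiter=200)
print("recovered spikes near", np.nonzero(np.abs(r) > 0.3)[0])
```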

Relevance:

10.00%

Publisher:

Abstract:

Picking velocities automatically not only improves the efficiency of seismic data processing but also quickly provides an initial velocity model for prestack depth migration. In this thesis the Viterbi algorithm is used for automatic picking, but the picked velocity is often unreasonable. Thorough study and analysis suggest that the Viterbi algorithm itself can perform fast and effective automatic picking, but that the data supplied for picking may not have a continuous derivative over the surface of the velocity spectrum, i.e., the surface is not smooth, so the picked velocity may include irrational velocity information. To solve this problem, we develop a new filtering method based on a nonlinear coordinate transformation and a functional filter, which we call the Gravity Center Preserved Pulse Compressed Filter (GCPPCF).

The main idea of the GCPPCF is as follows: a curve such as a pulse is divided into several subsections; for each subsection the gravity center (a coordinate displacement) is calculated, and the value (density) of the subsection is assigned to the gravity center. When the gravity center departs from the center of its subsection, the assigned value is smaller than the actual one; only when the gravity center coincides with the subsection center does the assigned value equal the actual one. In the new coordinates the curve therefore narrows laterally compared with its original shape. This is a nonlinear coordinate transformation, because the gravity center changes with the shape of the subsection; it is also a filter, because the value assigned to the gravity center is a weighted mean of the subsection function; and it behaves as an adaptive, time-varying filter, because the weighting coefficients likewise change with the shape of the subsection.

The Viterbi algorithm, applied here to automatic picking of stacking velocities, accumulates the maxima of the velocity spectrum ('energy groups') in a forward pass and recovers the optimal solution by backward recursion, making it a convenient tool for automatic picking. The GCPPCF not only preserves the position of peak values and compresses the velocity spectrum, but can also serve as an adaptive time-varying filter to smooth a target curve or surface; we apply it to smooth the observed sequences so as to provide favorable input for the final accurate solution. Without this adaptive filtering we could not obtain clean input data or valid velocity information, and without the Viterbi algorithm's shortcut search we could not pick velocities automatically; the combination of the two algorithms therefore constitutes an effective automatic picking method. We apply automatic velocity picking to velocity analysis of extrapolated wavefields, and the results show that the imaging of deep layers with the extrapolated wavefield is markedly improved. The GCPPCF performs well in application: it can be used not only to optimize and smooth velocity spectra but also for the corresponding processing of other types of signal.

The automatic velocity-picking method developed in this thesis gives favorable results when applied to a simple model, a complicated model (the Marmousi model), and real data; the results show that it is both feasible and practical.
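A minimal sketch of a dynamic-programming (Viterbi-style) pick through a velocity spectrum; the GCPPCF smoothing is not reproduced here, the spectrum is synthetic, and the sizes and jump constraint are illustrative:

```python
import numpy as np

def viterbi_pick(spectrum, max_jump=2):
    """Pick one velocity index per time sample that maximizes the accumulated
    spectrum value while limiting the jump between consecutive samples."""
    n_t, n_v = spectrum.shape
    score = np.full((n_t, n_v), -np.inf)
    back = np.zeros((n_t, n_v), dtype=int)
    score[0] = spectrum[0]
    for t in range(1, n_t):
        for v in range(n_v):
            lo, hi = max(0, v - max_jump), min(n_v, v + max_jump + 1)
            prev = int(np.argmax(score[t - 1, lo:hi])) + lo
            back[t, v] = prev
            score[t, v] = spectrum[t, v] + score[t - 1, prev]
    # Backtrack the optimal path from the best final state.
    path = np.empty(n_t, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for t in range(n_t - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Toy spectrum: a smooth "energy ridge" whose velocity index drifts with time.
rng = np.random.default_rng(2)
n_t, n_v = 60, 40
ridge = np.clip((5 + 0.4 * np.arange(n_t)).astype(int), 0, n_v - 1)
spec = 0.1 * rng.random((n_t, n_v))
spec[np.arange(n_t), ridge] += 1.0

print(np.abs(viterbi_pick(spec) - ridge).max())   # should be 0 or very small
```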

Relevance:

10.00%

Publisher:

Abstract:

In exploration seismology, the geological targets of oil and gas reservoirs in complex media demand highly accurate imaging of structure and lithology, so prestack imaging and elastic inversion of seismic waves in complex media have moved to the leading edge of research. The seismic response measured at the surface carries two fundamental pieces of information: the propagation effects of the medium and the reflections from the layer boundaries within it. Propagation represents the low-wavenumber component of the medium, the so-called trend or macro layering, whereas the reflections represent the high-wavenumber component, the detailed or fine layering. Migration velocity analysis resolves the low-wavenumber component, while prestack elastic inversion resolves the high-wavenumber component. This dissertation studies both aspects: migration velocity estimation and elastic inversion.

First, any migration velocity analysis method must contain two basic elements: a criterion that tells us whether the model parameters are correct, and an updating scheme that tells us how to change them when they are not; together these determine the properties and efficiency of the velocity estimation. This dissertation presents a migration velocity analysis method based on CFP technology in which a top-down layer-stripping strategy is adopted to avoid the difficulties of selection. Its advantage is that the traveltime errors obtained from the DTS panel are defined directly in time, unlike methods based on common-image gathers, where residual curvature measured in depth must be converted to traveltime errors. Four aspects of the proposed method are improved:
• A new parameterization of the velocity model is provided in which layer boundaries are interpolated with cubic splines through control locations, and the velocity within a layer may change laterally as a segmented linear function of the velocities at the lateral control points; this parameterization is well suited to the updating procedure (a sketch of this parameterization follows the abstract).
• Analytical formulas for the traveltime errors and the model-parameter updates in the tau-p domain are derived under a locally laterally homogeneous assumption. The velocity estimates are computed iteratively as a parametric inversion, and a zero differential time shift in the DTS panel for each layer indicates convergence of the velocity estimation.
• A method of building the initial model from prior information improves the efficiency of the velocity analysis: events of interest picked in the stacked section define the layer boundaries, and the results of conventional velocity analysis define the layer velocities.
• An interactive, integrated software environment combining migration velocity analysis and prestack migration is built.
The proposed method is first applied to synthetic data, where both the accuracy and the efficiency of the velocity estimation prove very good, and then to a marine field dataset.

In this example, prestack and poststack depth migrations of the data are computed with velocity models built by different methods; the comparison shows that the model from the proposed method is better and clearly improves the quality of the migration. Following the theoretical method of expressing a multi-variable function by products of single-variable functions suggested by Song Jian (2001), a separable expression of the one-way wave operator is studied. An optimized separable approximation of the one-way wave operator is presented that easily handles lateral velocity variation in the space and wavenumber domains respectively and has good approximation accuracy; a new prestack depth migration algorithm based on this approximation is developed and used to test the velocity-estimation results.

Second, according to the theory of seismic reflection and transmission, the variation of amplitude with incidence angle is related to the elasticity of the media on either side of an interface. Conventional inversion of poststack data uses only the information in the reflection operator at zero incidence; if more robust solutions are required, the amplitudes at all incidence angles should be used. A natural separable expression of the reflection/transmission operator is derived as a sum of products of two groups of functions: one group varies with phase space, while the other is related to the elastic parameters of the medium and the geological structure. Using this expression, a one-way wave-equation modeling method for primary reflected waves is developed; it accommodates moderately heterogeneous media, preserves the AVA behavior of reflections for incidence angles below 45°, and is computationally very efficient. The same separable expression is also used to construct a prestack elastic inversion algorithm. Unlike AVO analysis and inversion, which use angle gathers formed during prestack migration, the proposed algorithm builds a system of linear equations during prestack migration via the separable expression of the reflection/transmission operator; the unknowns of these equations are related to the elasticity of the medium, so their solution provides the elastic information. The proposed inversion is essentially the same as AVO inversion; the difference lies only in how the amplitude variation with incidence angle is processed and in the computational domain.
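A minimal sketch of the layer parameterization referred to in the first bullet above, assuming illustrative control-point locations and values (the scipy interpolants stand in for whatever the dissertation actually implements):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Layer boundaries as cubic splines through depth control points; velocity
# within the layer as a piecewise-linear function of velocity control points.
x_ctrl = np.array([0.0, 2.0, 4.0, 6.0, 8.0])      # lateral control locations (km)
z_ctrl = np.array([1.0, 1.2, 0.9, 1.1, 1.3])      # boundary depths at controls (km)
v_ctrl = np.array([2.0, 2.1, 2.3, 2.2, 2.4])      # layer velocities at controls (km/s)

boundary = CubicSpline(x_ctrl, z_ctrl)             # z(x) of the layer base

x = np.linspace(0.0, 8.0, 81)
z_base = boundary(x)                               # smooth boundary geometry
v_layer = np.interp(x, x_ctrl, v_ctrl)             # segmented-linear velocity

# During velocity updating, only z_ctrl and v_ctrl change; the interpolants
# regenerate the gridded model handed to the prestack migration.
print(z_base[:5].round(3), v_layer[:5].round(3))
```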

Relevance:

10.00%

Publisher:

Abstract:

Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim-finding problem, lead-finding can be tailored toward specific claim-quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating only a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
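As a toy illustration of the QRS idea, perturbing the parameters of a claim and re-evaluating it over the response surface; the data, the claim, and the threshold below are all made up:

```python
import numpy as np

# A claim is a parameterized query; perturbing its parameters and re-evaluating
# shows whether the stated conclusion is robust or cherry-picked.
rng = np.random.default_rng(3)
monthly_sales = rng.normal(100, 15, size=48)       # hypothetical time series

def claim_value(window_end, window_len):
    """Parameterized query: average sales over the chosen window."""
    return monthly_sales[window_end - window_len:window_end].mean()

# The "claim": sales over the last 6 months average above 110.
stated = claim_value(48, 6)

# Explore the query response surface around the stated parameters.
surface = {(end, ln): claim_value(end, ln)
           for end in range(24, 49) for ln in range(3, 13)}
support = np.mean([v > 110 for v in surface.values()])
print(f"stated value {stated:.1f}; fraction of nearby parameter settings "
      f"that also support the claim: {support:.2f}")
```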

Relevance:

10.00%

Publisher:

Abstract:

Mixed flocks of pale-bellied Brent geese (Branta bernicla hrota) and wigeon (Anas penelope L.) feeding on intertidal Zostera spp. were studied during October 1993 with respect to tidal position, feeding method and duration, and competitive interactions within and between species. Owing to many disturbance incidents affecting the use of the site by wildfowl, only complete data on flow tides were presented. Brent geese fed over a greater period of the tidal cycle than wigeon. Differences in feeding methods indicated that Brent geese exploited the rhizomes, which are energetically more profitable than the shoots on which wigeon fed. Aggressive interactions were recorded within species but there were no records of aggression between species; more subtle competition for space, however, may have occurred during feeding. Brent geese could reach Zostera spp. for a short period after increasing water depth prevented access by wigeon. However, individual wigeon were observed foraging near feeding Brent geese, picking up the scraps of material discarded by the latter, and small numbers of wigeon may benefit from the presence of the geese. These benefits for some individual wigeon are not considered to compensate for the disadvantages to the wigeon population as a whole of feeding on poorer-quality food for a shorter period of the tidal cycle. This disadvantage is likely to have contributed to the decline in the wigeon population on Strangford Lough, Co. Down, while numbers of Brent geese have been maintained at a high level.

Relevance:

10.00%

Publisher:

Abstract:

Although coordinated patterns of body movement can be used to communicate action intention, they can also be used to deceive. Often known as deceptive movements, these unpredictable patterns of body movement can give a competitive advantage to an attacker trying to outwit a defender. In this study, we immersed novice and expert rugby players in an interactive virtual rugby environment to understand how the dynamics of deceptive body movement influence a defending player's decisions about how and when to act. When asked to judge final running direction, expert players, who were found to tune into prospective tau-based information specified in the dynamics of 'honest' movement signals (centre of mass), performed significantly better than novices, who tuned into the dynamics of 'deceptive' movement signals (upper-trunk yaw and out-foot placement) (p<.001). These findings were further corroborated in a second experiment in which players were able to move as if to intercept or 'tackle' the virtual attacker. An analysis of action responses showed that experts waited significantly longer before initiating movement (p<.001); by waiting longer and picking up more information about future running direction, these experts made significantly fewer errors (p<.05). In this paper we not only present a mathematical model that describes how deception in body-based movement is detected, but also show how perceptual expertise is manifested in action expertise. We conclude that being able to tune into the 'honest' information specifying true running intention gives a strong competitive advantage.