951 results for stereo matching problem
Abstract:
Mismatch filtering algorithms are studied from both the two-dimensional and three-dimensional perspectives. A grayscale preprocessing algorithm applied before matching to reduce mismatches and a disparity filtering algorithm based on true control points are proposed. The former performs grayscale equalization only on the overlapping region of the two images, which reduces the computational cost; the latter builds on conventional disparity mean filtering and further improves the efficiency of filtering out mismatches. Experimental results on real images show that the new algorithms can effectively remove mismatches, improve the accuracy of three-dimensional reconstruction, and guarantee the reconstruction quality.
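A minimal sketch of the disparity-filtering idea described in this abstract: disparities that deviate too far from a reference surface built from trusted control points are flagged as mismatches and removed. The interpolation scheme, threshold, and function names below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of control-point-based disparity filtering: disparities
# that deviate too much from a reference surface interpolated through trusted
# control points are treated as mismatches and masked out.
import numpy as np
from scipy.interpolate import griddata

def filter_disparity(disp, control_yx, control_disp, threshold=2.0):
    """Mask disparities far from a surface fitted through control points."""
    h, w = disp.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Reference disparity surface interpolated from the trusted control points.
    ref = griddata(control_yx, control_disp, (yy, xx), method='linear')
    ref = np.where(np.isnan(ref), np.median(control_disp), ref)
    mismatched = np.abs(disp - ref) > threshold
    filtered = disp.astype(float).copy()
    filtered[mismatched] = np.nan            # drop suspected mismatches
    return filtered, mismatched

# Toy usage: a smooth disparity plane with a few gross mismatches injected.
h, w = 50, 60
yy, xx = np.mgrid[0:h, 0:w]
disp = 10 + 0.05 * xx + 0.02 * yy
disp[10, 10] = 40.0                          # simulated mismatches
disp[30, 45] = -5.0
control_yx = np.array([[5, 5], [5, 55], [45, 5], [45, 55], [25, 30]])
control_disp = np.array([disp[r, c] for r, c in control_yx])
filtered, bad = filter_disparity(disp, control_yx, control_disp)
```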
Abstract:
Based on the actual MES project of the cold-rolled sheet plant of Angang New Steel Co., Ltd., a task allocation method used in unit production scheduling is introduced. Taking the best match between units and production tasks and the load balance among units as performance criteria, the method adopts a real-time minimum-load allocation rule. It solves the online generation of unit production schedules for the cold-rolling line in a CIMS environment, realizes optimized control of production routing on the line, and thereby improves product quality and equipment utilization.
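As a hedged illustration of a real-time minimum-load allocation rule of the kind described above, each arriving task is assigned to the compatible unit with the smallest accumulated load, which tends to balance load across units. The data structures, compatibility check, and tie-breaking are assumptions for illustration, not the plant's MES implementation.

```python
# Illustrative sketch of a real-time minimum-load allocation rule: each
# incoming production task is assigned to the compatible unit that currently
# carries the least accumulated load.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    capabilities: set          # task types this unit can process
    load: float = 0.0          # accumulated processing hours
    queue: list = field(default_factory=list)

def assign_task(units, task_id, task_type, hours):
    """Assign the task to the least-loaded compatible unit (ties broken by name)."""
    candidates = [u for u in units if task_type in u.capabilities]
    if not candidates:
        raise ValueError(f"no unit can process task type {task_type!r}")
    chosen = min(candidates, key=lambda u: (u.load, u.name))
    chosen.queue.append(task_id)
    chosen.load += hours
    return chosen.name

# Toy usage: three units and a stream of coil-processing tasks.
units = [Unit("A", {"anneal", "roll"}), Unit("B", {"roll"}), Unit("C", {"roll", "trim"})]
for i, (ttype, hrs) in enumerate([("roll", 3), ("roll", 2), ("trim", 1), ("roll", 4)]):
    assign_task(units, task_id=i, task_type=ttype, hours=hrs)
```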
Abstract:
As oil field exploration and exploitation proceed, the problem of enhancing combined oil and gas recovery arises, which places new and higher demands on the precision of seismic data. Based on a study of the exploration status, resource potential, and 3D seismic data quality of representative mature oil fields in China, and taking the Ken71 area of the Shengli field as the study object, this dissertation applies high-density 3D seismic technology to the complex geological problems encountered in the exploration and development of mature areas, with in-depth research on the acquisition and processing of high-density 3D seismic data. It examines the roles of conventional 3D seismic, high-density 3D seismic, 3D VSP seismic, and multi-wave multi-component seismic data in solving these geological problems, discusses in particular the advantages and limitations of high-density 3D seismic exploration, and proposes an integrated study method that gives priority to high-density 3D seismic data while combining other seismic data to improve exploration accuracy in mature areas. After a detailed study of acquisition methods for high-density 3D seismic and 3D VSP surveys, physical and numerical simulations are carried out to design and optimize the observation system. A "four-combination" integrated acquisition scheme combining borehole and surface seismic surveys and a "three-synchronization" technique are optimized, realizing combined P-wave and S-wave acquisition, combined digital-geophone and analog-geophone acquisition, combined 3D VSP and surface seismic acquisition, and combined crosswell seismic acquisition; surface equipment and downhole instruments record synchronously, 3D VSP and surface shots are shared and received synchronously, and high-density P-wave and high-density multi-wave data are acquired synchronously, yielding a large volume of high-quality seismic data. Based on a detailed analysis of the high-density analog-geophone data, amplitude-preserving processing techniques are adopted, the trade-off between signal-to-noise ratio and resolution is studied to improve the resolution of the seismic profiles, and post-stack cascaded migration, prestack time migration, and prestack depth migration are used for high-precision imaging, producing reliable high-resolution data. High-precision processing of the high-density digital-geophone data likewise yields clear improvements in resolution, fidelity, fault-point clarity, interbed information, and formation characterization. Comparison of the processing results shows that high-density analog-geophone acquisition with high-precision imaging can enhance resolution, and that high-density seismic surveys based on digital geophones can better solve subsurface geological problems. Fine processing of the synchronously acquired converted-wave and 3D VSP data also gives good results. On the basis of the high-density data acquisition and processing, high-precision structural interpretation and inversion are carried out, together with a preliminary interpretation of the 3D VSP and multi-wave multi-component data. The high-precision interpretation indicates that, after high-resolution processing, the structural maps obtained from the high-density seismic data agree better with the true geological situation.
Abstract:
As petroleum exploration and development in China continue, exploration is becoming increasingly difficult. For increasing reserves and production, lithologic hydrocarbon reservoirs have become the most workable, promising, and widespread exploration targets. In the past, guided by the theory of complicated fault-block reservoirs, the Dagang Oil Field developed a matching set of techniques and tools for prospecting complicated fault-block reservoirs that reached the top level of the exploration industry in China, but research on lithologic hydrocarbon reservoirs has been limited, which has held back their exploitation. In this thesis, through an in-depth study of the lithologic deposits in the Shasan member of the Zhouqingzhuang Oil Field, a suite of holographic fine reservoir prediction techniques is built up, with the following main results: 1. Geological, seismic, drilling, logging, and other information is applied to sensitivity analysis, geological modeling, inversion, and integrated formation evaluation, establishing the method and workflow of refined multi-information reservoir prediction. 2. A fully three-dimensional fine structural interpretation method is established: to address the problem of accurately calibrating the roughly 90% of wells that are deviated, a deviated-well spatial calibration method is proposed, making horizon calibration more exact; to address the problem of fault calibration and combination in seismic interpretation, a computational method for seismic coherence based on the wavelet transform is proposed, making the identification of faults at different scales more reliable and reasonable; to identify structural attitude exactly, a multi-well-constrained velocity modeling method is proposed, bringing the interpreted structural attitude closer to the facts. 3. A high-accuracy reservoir inversion method is established: because conventional wave-impedance inversion cannot reliably separate reservoir from non-reservoir in this area, a method combining reservoir log-response analysis with sensitive-log-parameter inversion is proposed. (1) The log responses of reservoir and non-reservoir in the region of interest are analyzed to determine the log parameter most sensitive for distinguishing them; (2) sensitive-log-parameter inversion is performed on the basis of wave-impedance inversion to improve inversion accuracy, so that reservoir beds as thin as 4-5 m become recognizable. 4. A 4D reservoir prediction workflow is established: because in lithologic reservoirs the spatial characteristics of the reservoir cannot be made clear by applying structural mapping and reservoir prediction techniques only once, a 4D prediction workflow is proposed. That is, based on the development conceptual design, the reservoir is predicted at different times, i.e. multiple 3D reservoir predictions in a time sequence; each time the prediction accuracy improves as new well data are incorporated, thereby achieving high quality and high efficiency in exploration and development. During the exploration and development of the lithologic deposits in the Shasan member of the Zhouqingzhuang Oil Field, thirteen wells achieved a 100% success rate, which amply demonstrates that this suite of methods is scientific and effective.
Abstract:
The problem of oil and gas migration and accumulation has been investigated for many years in petroleum geology, yet it remains the weakest link, and studying the dynamics of hydrocarbon migration and accumulation is a challenging task. The study area of this work, the Chengbei step-fault zone, is an important exploration area of the Dagang oil field. Oil distribution there is complicated because of abundant faults and source-reservoir-cap assemblages. In recent years oil shows have often been encountered, but no large-scale pool has been found; the most important problem affecting exploration decisions is the lack of knowledge about the accumulation process and the resource potential. Given these geological characteristics and exploration difficulties, the analysis principles of dynamics are applied in this work. The course from source to reservoir is taken as the main research line, and the relations among effective source rock, migration dynamics, and the heterogeneous distribution of carrier beds are discussed, especially at key times. Numerical modeling is used to couple migration with the carrier system and to analyze the dynamic process of oil migration quantitatively. Building on earlier structural and sedimentary research, a basin model is constructed and its parameters are chosen. The author reconstructs the characteristics and distribution of fluid dynamics at the main pool-forming time by numerical modeling, and divides the oil migration and accumulation systems according to the distribution of fluid potential. Furthermore, the extent of effective source rock and the scale of hydrocarbon expulsion through geological history are studied by the hydrocarbon-generation-potential method. In the carrier research, emphasis is placed on analyzing how faults control oil and gas migration and accumulation. Based on the mechanism of fault sealing, the author puts forward a new quantitative method for evaluating fault opening and sealing, the fault connective probability, which uses oil and gas shows in footwall and hanging-wall reservoirs as the index for identifying whether a fault seals. This method considers many influencing factors synthetically. The sealing properties at different positions, in three dimensions, of the faults controlling hydrocarbon accumulation are then evaluated quantitatively, laying a foundation for building complex carrier systems. Ten carrier-dynamics models are established by analyzing the matching relations among all kinds of carriers in the main pool-forming period. The formation and distribution of the main migration pathways are studied quantitatively with a buoyancy-percolation model, which combines effective source rock, migration dynamics, and carriers. On the basis of the oil and gas migration and accumulation model, the author computes the hydrocarbon losses during secondary migration, before cap-rock formation, and through valueless accumulation, according to the stages of migration and accumulation and the loss mechanisms. At the same time, the resource potential of every migration and accumulation system is evaluated; the middle systems reach 5.67×10^8 t, indicating a large exploration potential. Finally, according to the quantitative results above, favorable exploration targets are forecast by overlapping migration pathways with valid traps and considering pool-forming factors. Drilling of actual wells has proved that the results are credible, and they offer strong support for optimizing exploration projects in the Chengbei step-fault zone.
Abstract:
The dynamic reservoir model and the distribution of remaining oil after polymer injection in the Shengtuo oilfield is a frontier problem in Sinopec's "11th Five-Year Plan" scientific and technological program. The reservoirs in the study area are distributary-channel sandstones. After 34 years of water-injection development and 7 years of polymer-injection pilot experiments, a highly complex heterogeneous dynamic evolution has occurred in the macro and micro parameters of the reservoir model and in its flow field. It is therefore essential to construct a complete dynamic reservoir model for successful prediction of the distribution of remaining oil. With comprehensive application of multidisciplinary theory and techniques, a variety of data and information, extensive use of computer technology, and the integration of static and dynamic, macro and micro, and 1D-4D analyses, the research reveals the main features, evolution, and mechanisms of the reservoir flow field, the types of geological hazards and their destructiveness, and the macro field, micro field, flow field, and hydrodynamic geological processes of reservoir development in different development periods after long-term polymer injection in Es2 of the Shengtuo oilfield. The principal innovative achievements are: 1. Four flow units, A, B, C, and D, are established in the target formations, and their features and distribution are revealed. 2. The environmental pollution and geological hazards induced by oilfield exploitation in the study area are described, together with their formation mechanisms, controlling factors, destructiveness, and approaches to hazard mitigation. 3. Dynamic evolution models of the macro parameters, micro-matrix field, pore-network field, clay-mineral field, and seepage field are established for six development stages in the study area, revealing the evolution of reservoir flow and its mechanisms after polymer injection. 4. Macro and micro distribution models of the remaining oil after tertiary-recovery polymer injection are established for different water-cut periods, revealing the formation mechanism and distribution of the remaining oil. 5. A remaining-oil forecasting model for the study area is established, and the formation and distribution of remaining oil over the following six years are forecast. 6. It is proposed that reservoir fluid-dynamic geological processes are the major driving forces for the evolution of the reservoir macro field and micro seepage field across the different water-cut periods after polymer injection. 7. A dynamic reservoir model is established, and matching theory, methods, and technology for describing and predicting the remaining oil are proposed, which deepen the theory and techniques of development geology for continental rift basins. Key words: Polymer-flooded reservoir; Geological hazards; Dynamic model; Remaining oil forecast
Abstract:
In practical seismic profiles, multiple reflections tend to impede even the experienced interpreter in deducing information from the reflection data. Surface multiples are usually much stronger, more broadband, and more of a problem than internal multiples, because the reflection coefficient at the water surface is much larger than those found in the subsurface. For this reason most attempts to remove multiples from marine data, including this one, focus on surface multiples. A surface-related multiple attenuation method can be formulated as an iterative procedure. In this thesis, a fully data-driven approach called MPI, multiple prediction through inversion (Wang, 2003), is applied to a real marine seismic data example. It is a promising scheme for predicting a relatively accurate multiple model by updating the model iteratively, as is usually done in a linearized inverse problem. The prominent characteristic of the MPI method is that it eliminates the need for an explicit surface operator, which means it can model the multiple wavefield without any knowledge of the surface or subsurface structure, or even a source signature. Another key feature is that it predicts multiples not only in time but also in phase and amplitude. The real-data experiments show that the scheme can be made very efficient if a good initial estimate of the multiple-free data is provided in the first iteration. In the other core step, multiple subtraction, an expanded multi-channel matching (EMCM) filter is used. Compared with a normal multichannel matching filter, where an original seismic trace is matched by a group of multiple-model traces, in the EMCM filter a seismic trace is matched not only by the ordinary multiple-model traces but also by their mathematically generated adjoints: the first derivative of each multiple-model trace, its Hilbert transform, and the derivative of the Hilbert transform. The third chapter applies these methods to real data, from which their effectiveness and practical value can clearly be seen. For this specific case, three groups of experiments were carried out: testing the effectiveness of the MPI method, comparing subtraction results with a fixed filter length but different window lengths, and investigating the influence of the initial subtraction result on the MPI method. In the real-data application we do find that the initial demultiple estimate has a large influence on the MPI method. Two approaches, based on first arrivals and on a masking filter respectively, are then introduced to refine the initial demultiple estimate. In the last part, conclusions are drawn from the results obtained.
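To make the EMCM idea concrete, here is a hedged single-trace sketch: the multiple-model traces are expanded with their adjoints (first derivative, Hilbert transform, derivative of the Hilbert transform), matched to the data trace by least squares, and subtracted. Function names, the damping-free least-squares solve, and the toy data are illustrative assumptions, not the thesis implementation.

```python
# Sketch of expanded multi-channel matching (EMCM) style subtraction for one
# trace, assuming the multiple model has already been predicted (e.g. by an
# MPI-type scheme).
import numpy as np
from scipy.signal import hilbert

def emcm_subtract(data_trace, model_traces):
    """Match a data trace with multiple-model traces plus their adjoints and
    subtract the matched multiples."""
    basis = []
    for m in model_traces:
        h = np.imag(hilbert(m))          # Hilbert transform of the model trace
        basis.extend([m,
                      np.gradient(m),    # first derivative
                      h,
                      np.gradient(h)])   # derivative of the Hilbert transform
    A = np.column_stack(basis)
    # Least-squares filter coefficients: minimise |data - A c|^2
    coeffs, *_ = np.linalg.lstsq(A, data_trace, rcond=None)
    predicted_multiples = A @ coeffs
    return data_trace - predicted_multiples

# Toy usage: one primary plus a delayed "multiple", and a crude model of it.
t = np.linspace(0.0, 1.0, 500)
primary = np.exp(-((t - 0.3) / 0.02) ** 2)
multiple = 0.6 * np.exp(-((t - 0.6) / 0.02) ** 2)
data = primary + multiple
model = [np.exp(-((t - 0.6) / 0.025) ** 2)]   # imperfect multiple model
demultipled = emcm_subtract(data, model)
```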
Resumo:
In modem signal Processing,non-linear,non-Gaussian and non-stable signals are usually the analyzed and Processed objects,especially non-stable signals. The convention always to analyze and Process non-stable signals are: short time Fourier transform,Wigner-Ville distribution,wavelet Transform and so on. But the above three algorithms are all based on Fourier Transform,so they all have the shortcoming of Fourier Analysis and cannot get rid of the localization of it. Hilbert-Huang Transform is a new non-stable signal processing technology,proposed by N. E. Huang in 1998. It is composed of Empirical Mode Decomposition (referred to as EMD) and Hilbert Spectral Analysis (referred to as HSA). After EMD Processing,any non-stable signal will be decomposed to a series of data sequences with different scales. Each sequence is called an Intrinsic Mode Function (referred to as IMF). And then the energy distribution plots of the original non-stable signal can be found by summing all the Hilbert spectrums of each IMF. In essence,this algorithm makes the non-stable signals become stable and decomposes the fluctuations and tendencies of different scales by degrees and at last describes the frequency components with instantaneous frequency and energy instead of the total frequency and energy in Fourier Spectral Analysis. In this case,the shortcoming of using many fake harmonic waves to describe non-linear and non-stable signals in Fourier Transform can be avoided. This Paper researches in the following parts: Firstly,This paper introduce the history and development of HHT,subsequently the characters and main issues of HHT. This paper briefly introduced the basic realization principles and algorithms of Hilbert-Huang transformation and confirms its validity by simulations. Secondly, This paper discuss on some shortcoming of HHT. By using FFT interpolation, we solve the problem of IMF instability and instantaneous frequency undulate which are caused by the insufficiency of sampling rate. As to the bound effect caused by the limitation of envelop algorithm of HHT, we use the wave characteristic matching method, and have good result. Thirdly, This paper do some deeply research on the application of HHT in electromagnetism signals processing. Based on the analysis of actual data examples, we discussed its application in electromagnetism signals processing and noise suppression. Using empirical mode decomposition method and multi-scale filter characteristics can effectively analyze the noise distribution of electromagnetism signal and suppress interference processing and information interpretability. It has been founded that selecting electromagnetism signal sessions using Hilbert time-frequency energy spectrum is helpful to improve signal quality and enhance the quality of data.
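A minimal sketch of the Hilbert Spectral Analysis step described above: given one IMF (assumed to have been produced already by EMD, e.g. with a library such as PyEMD), the instantaneous amplitude and frequency are obtained from the analytic signal. The function names and toy data are illustrative.

```python
# Instantaneous amplitude and frequency of a single IMF via the analytic signal.
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum_of_imf(imf, fs):
    """Return instantaneous amplitude and frequency (Hz) of one IMF."""
    analytic = hilbert(imf)                      # analytic signal x + i*H{x}
    amplitude = np.abs(analytic)                 # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))        # unwrapped instantaneous phase
    # Instantaneous frequency = d(phase)/dt / (2*pi)
    freq = np.gradient(phase) * fs / (2.0 * np.pi)
    return amplitude, freq

# Toy usage: a chirp-like IMF sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (5 * t + 10 * t ** 2))  # frequency sweeps 5 -> 25 Hz
amp, inst_freq = hilbert_spectrum_of_imf(imf, fs)
```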
Abstract:
With the development of oil and gas exploration in China, continental exploration is turning from structural oil and gas reservoirs to subtle oil and gas reservoirs. The reserves of the subtle reservoirs found so far account for more than 60 percent of the discovered oil and gas reserves, so exploration for subtle reservoirs is becoming more and more important and can be taken as the main direction for future reserve growth. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most of the continental rift basins in East China have entered exploration stages of medium to high maturity. Although the quality of the seismic data is relatively good, these areas are characterized by thin sands, small faults, and limited stratigraphic extent, which demand high-resolution seismic data; an important task is to improve the signal-to-noise ratio of the high-frequency components of the data. In West China there are complex landforms, deeply buried targets, complex geological structures, many faults, small traps, poor rock properties, many overpressured formations, and drilling difficulties; the seismic records therefore show a low signal-to-noise ratio and many kinds of noise, so methods and techniques for noise attenuation must be developed in both acquisition and processing. Oil and gas exploration thus needs high-resolution geophysical techniques in order to implement the oil resource strategy of keeping production and reserves stable in East China and developing production and reserves in West China. A high signal-to-noise ratio of the seismic data is the foundation: high resolution and high fidelity cannot be achieved without it. We therefore emphasize structure-oriented approaches to improving the signal-to-noise ratio in complex areas, and several noise-attenuation methods are put forward that truly reflect the geological features: they preserve the geological structures, keep the edges of geological bodies, and improve the identification of oil and gas traps. The ideas of emphasizing fundamentals, giving prominence to innovation, and paying attention to application run through the thesis. Dip scanning centered on the scanned point inevitably blurs the edges of geological features such as faults and fractures; we develop a new end-point dip-scanning method with two-sided scanning to solve this problem. Using the new dip-scanning scheme, we propose methods of coherence-based signal estimation, coherence-based seismic waveform characterization, and most-homogeneous dip scanning for noise attenuation; they preserve geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip scanning works in the time-space domain; a new dip-scanning method in the frequency-wavenumber domain is also put forward for noise attenuation. It exploits the ability of the f-k domain to distinguish reflections of different dips, so it can reduce noise and recover dip information. We also describe a methodology for studying and developing filtering methods based on differential equations. It transforms the filtering equations in the frequency or f-k domain into the time or time-space domain and uses a finite-difference algorithm to solve them. This method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter, and it is computationally efficient. We put forward a matching-pursuit method for noise suppression. The method decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions; the waveforms are chosen to best match the signal structures, so the effective signal can be extracted from the noisy record and the noise reduced. We also introduce a beamforming filtering method for noise elimination; real seismic data processing shows that it is effective in attenuating multiples, including internal multiples, improving the signal-to-noise ratio and resolution while preserving the fidelity of the effective signals. Calculations on theoretical models and applications to real seismic data prove that the methods in this thesis can effectively suppress random noise, eliminate coherent noise, and improve the resolution of the seismic data; they are highly practical and their effect is obvious.
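The matching-pursuit step mentioned above can be illustrated with a small, hedged sketch: a noisy trace is greedily decomposed over a redundant dictionary and only the strongest atoms are kept as the effective signal. The dictionary construction, stopping rule, and names are assumptions for illustration, not the thesis implementation.

```python
# Greedy matching pursuit over a redundant Gabor-like dictionary.
import numpy as np

def gabor_dictionary(n, widths=(5, 10, 20, 40), freqs=(0.02, 0.05, 0.1, 0.2)):
    """Redundant dictionary of unit-norm Gabor-like atoms (columns)."""
    t = np.arange(n)
    atoms = []
    for c in range(0, n, 5):                 # atom centres every 5 samples
        for w in widths:
            for f in freqs:
                g = np.exp(-0.5 * ((t - c) / w) ** 2) * np.cos(2 * np.pi * f * (t - c))
                norm = np.linalg.norm(g)
                if norm > 0:
                    atoms.append(g / norm)
    return np.column_stack(atoms)

def matching_pursuit(signal, D, n_atoms=20):
    """Return the reconstruction from the n_atoms best-matching atoms."""
    residual = signal.copy()
    recon = np.zeros_like(signal)
    for _ in range(n_atoms):
        corr = D.T @ residual                # correlate residual with every atom
        k = np.argmax(np.abs(corr))          # best-matching atom
        recon += corr[k] * D[:, k]
        residual -= corr[k] * D[:, k]
    return recon

# Toy usage: a wavelet-like signal buried in random noise.
n = 400
t = np.arange(n)
clean = np.exp(-0.5 * ((t - 200) / 15) ** 2) * np.cos(2 * np.pi * 0.05 * (t - 200))
noisy = clean + 0.3 * np.random.randn(n)
denoised = matching_pursuit(noisy, gabor_dictionary(n), n_atoms=10)
```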
Abstract:
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow rely on weak assumptions about the local variation of the flow and the variation of image brightness. Strengthening these assumptions improves the flow computation; the computational consequence is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and to local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that it varies slowly in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields that do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, the temporal and spatial derivatives must be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
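As an illustration of the local (patchwise) differential approach discussed above, the sketch below solves the brightness-constancy constraint Ix*u + Iy*v + It = 0 in a least-squares sense within one patch, in the style of Lucas-Kanade. It is a generic illustration under that assumption, not the authors' implementation.

```python
# Local differential optical flow for a single patch (Lucas-Kanade style).
import numpy as np

def patch_flow(frame0, frame1, y0, x0, half=7):
    """Estimate one (u, v) displacement for the patch centred at (y0, x0)."""
    # Spatial gradients from the first frame, temporal difference between frames.
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    sl = (slice(y0 - half, y0 + half + 1), slice(x0 - half, x0 + half + 1))
    A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
    b = -It[sl].ravel()
    # Least-squares solution of A [u, v]^T = b (valid if the patch is textured).
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Toy usage: a smooth textured image shifted right by one pixel.
y, x = np.mgrid[0:64, 0:64]
img = np.sin(0.3 * x) + np.cos(0.2 * y + 0.1 * x)
shifted = np.roll(img, shift=1, axis=1)      # true motion approx. u = 1, v = 0
u, v = patch_flow(img, shifted, y0=32, x0=32)
```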
Abstract:
The binocular perception of shape and depth relations between objects can change considerably if the viewing direction is changed by only a small angle. We explored this effect psychophysically and found a strong depth reduction effect for large disparity gradients. The effect is strongest for horizontally oriented stimuli, and stronger for line stimuli than for points. This depth scaling effect is discussed in a computational framework of stereo based on a Bayesian approach, which allows integration of information from different types of matching primitives weighted according to their robustness.
Abstract:
Affine transformations are often used in recognition systems, to approximate the effects of perspective projection. The underlying mathematics is for exact feature data, with no positional uncertainty. In practice, heuristics are added to handle uncertainty. We provide a precise analysis of affine point matching, obtaining an expression for the range of affine-invariant values consistent with bounded uncertainty. This analysis reveals that the range of affine-invariant values depends on the actual $x$-$y$-positions of the features, i.e. with uncertainty, affine representations are not invariant with respect to the Cartesian coordinate system. We analyze the effect of this on geometric hashing and alignment recognition methods.
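As a small illustration of the representation being analyzed, the sketch below computes the affine coordinates of a feature point with respect to a basis triple and shows how bounded positional noise perturbs them; the uncertainty model and names are illustrative, not the paper's derivation.

```python
# Affine coordinates (alpha, beta) of a point p with respect to a basis triple
# (o, b1, b2), i.e. p = o + alpha*(b1 - o) + beta*(b2 - o). Under an affine
# transform of all four points the coordinates are unchanged; with bounded
# positional uncertainty they are only approximately invariant.
import numpy as np

def affine_coords(o, b1, b2, p):
    """Solve [b1-o, b2-o] [alpha, beta]^T = p - o for the affine coordinates."""
    M = np.column_stack([b1 - o, b2 - o])
    return np.linalg.solve(M, p - o)

rng = np.random.default_rng(1)
o, b1, b2 = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])
p = np.array([4.0, 7.0])

exact = affine_coords(o, b1, b2, p)                     # (0.4, 0.7)

# Apply the same (hypothetical) affine transform to every point: invariance holds.
A, t = np.array([[1.3, 0.4], [-0.2, 0.9]]), np.array([5.0, -3.0])
warped = affine_coords(A @ o + t, A @ b1 + t, A @ b2 + t, A @ p + t)

# Perturb each measured point within a +/- eps box: the coordinates shift,
# and the size of the shift depends on where the features actually lie.
eps = 0.5
noisy = [q + rng.uniform(-eps, eps, 2) for q in (o, b1, b2, p)]
perturbed = affine_coords(*noisy)
```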
Abstract:
A polynomial time algorithm (pruned correspondence search, PCS) with good average case performance for solving a wide class of geometric maximal matching problems, including the problem of recognizing 3D objects from a single 2D image, is presented. Efficient verification algorithms, based on a linear representation of location constraints, are given for the case of affine transformations among vector spaces and for the case of rigid 2D and 3D transformations with scale. Some preliminary experiments suggest that PCS is a practical algorithm. Its similarity to existing correspondence based algorithms means that a number of existing techniques for speedup can be incorporated into PCS to improve its performance.
Abstract:
The task of shape recovery from a motion sequence requires the establishment of correspondence between image points. The two processes, the matching process and the shape recovery one, are traditionally viewed as independent. Yet, information obtained during the process of shape recovery can be used to guide the matching process. This paper discusses the mutual relationship between the two processes. The paper is divided into two parts. In the first part we review the constraints imposed on the correspondence by rigid transformations and extend them to objects that undergo general affine (non rigid) transformation (including stretch and shear), as well as to rigid objects with smooth surfaces. In all these cases corresponding points lie along epipolar lines, and these lines can be recovered from a small set of corresponding points. In the second part of the paper we discuss the potential use of epipolar lines in the matching process. We present an algorithm that recovers the correspondence from three contour images. The algorithm was implemented and used to construct object models for recognition. In addition we discuss how epipolar lines can be used to solve the aperture problem.
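To make the step "epipolar lines can be recovered from a small set of corresponding points" concrete, here is a generic normalized 8-point sketch that estimates the fundamental matrix and the epipolar lines it induces. This is standard two-view geometry, not the authors' specific contour-based algorithm.

```python
# Recovering epipolar lines from point correspondences (normalized 8-point).
import numpy as np

def normalize(pts):
    """Translate/scale points so their centroid is 0 and mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_matrix(x1, x2):
    """Estimate the rank-2 matrix F from >= 8 correspondences x2^T F x1 = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                     # undo the normalization

def epipolar_line(F, x1):
    """Line l = F x1 in the second image, as (a, b, c) with a*x + b*y + c = 0."""
    return F @ np.array([x1[0], x1[1], 1.0])
```

Once F is available, every candidate match for a point in the first image can be restricted to its epipolar line in the second image, which is exactly how the correspondence search is constrained in the matching process described above.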
Abstract:
The correspondence problem in computer vision is basically a matching task between two or more sets of features. In this paper, we introduce a vectorized image representation, which is a feature-based representation where correspondence has been established with respect to a reference image. This representation has two components: (1) shape, or (x, y) feature locations, and (2) texture, defined as the image grey levels mapped onto the standard reference image. This paper explores an automatic technique for "vectorizing" face images. Our face vectorizer alternates back and forth between computation steps for shape and texture, and a key idea is to structure the two computations so that each one uses the output of the other. A hierarchical coarse-to-fine implementation is discussed, and applications are presented to the problems of facial feature detection and registration of two arbitrary faces.