951 results for Depth, logging


Relevance: 20.00%

Abstract:

Based on the principles and methods of carbonate sedimentology and reservoir geology, the palaeokarst of the Ordovician carbonate rocks in the Tarim Basin has been studied comprehensively with methods drawn from several branches of geology. The features and distribution of palaeokarstification in the Ordovician carbonates are characterized, the control of karstification on the Ordovician carbonate reservoirs is discussed, and the regional distribution of karst-controlled carbonate reservoirs within the basin is predicted. The main contents and conclusions of the dissertation are as follows. Nine key indicators for the recognition of palaeokarst are proposed on the basis of careful observation of well cores, lithological and geochemical analyses, and drilling and logging responses to karst caves and fractures. The time and environment of cave filling are documented through detailed study of the lithofacies, mineralogy, and geochemistry of the physical and chemical fillings within the karst caves. The caves in the Ordovician carbonates of the Lunnan area were filled in the Early Carboniferous. The muddy fill in the upper caves was deposited in a subaerial fresh-water setting, while the muddy fill in the lower caves formed in a mixed water body of fresh water and dominantly sea water. Although most chemical fillings appear to have precipitated in the burial diagenetic environment after karstification, the mineralogical and geochemical characteristics of some chemical fillings indicate that they formed in a meteoric environment during karstification. The palaeokarst is clearly zoned in vertical profile and can be divided into four units from top to bottom: the surface karst zone, the vadose karst zone, the phreatic zone, and the tranquil (slow) flow zone. Two types of karst, limestone karst and dolostone karst, are differentiated in the Tarim Basin for the first time, based on a comparison of the features of each karst zone in limestone and dolostone regions. In the Tabei area, the base of karstification lies approximately 300 m below the Upper Ordovician unconformity, while in the Tazhong area it commonly lies 300 to 400 m, and in rare cases up to 750 m, below the unconformity. In the Lunnan and Tazhong areas, the palaeokarst morphology and the surface hydrological system are reconstructed for the first time, using the top of the Carboniferous "Shuangfeng limestone bed (Double-Peaks limestone)" as the datum. According to the palaeomorphology, the karst topography can be divided into three units: karst upland, karst slope, and karst valley. The vadose zone is well developed in the karst upland and extends to considerable depth. Both the vadose and phreatic zones are well developed on the karst slope and in the upstream valley. In the downstream valley the karstification is weak, and the vadose and phreatic zones are thin. In the Tazhong and Yingmaili areas, karstification also developed on relict carbonate palaeo-hills that existed as isolated blocks amid clastic strata.

Relevance: 20.00%

Abstract:

Based on studies of sequence stratigraphy, modern sedimentology, basin analysis, and petroleum systems in the Gubei depression, this paper establishes the high-resolution sequence-stratigraphic framework, the sedimentary systems and sandbody distribution, the effect of tectonics on sequence and sedimentary-system evolution, and tectonic-lithofacies models, and investigates the pool-formation mechanism of subtle traps. The main conclusions and views are as follows.
1. Through integrated sequence analysis of drilling, seismic, and well-log data, a high-resolution sequence framework is built for the Gubei depression. The Shahejie Formation is divided into two second-order sequences and seven third-order sequences, which comprise 4 kinds of systems tracts and 7 kinds of sedimentary systems: alluvial fan, subaqueous fan, alluvial fan and fan-delta, fan-delta, lacustrine fan, fluvial-delta-turbidite, lakeshore beach and bar, and deep-lake systems. Sandbody distribution is mapped on the basis of the third-order sequences.
2. Based on extensive experiments and well logs, the reservoirs are shown to contain many pore types, with styles including dissolution pores, weak cementation, matrix cementation, and impure filling, and 7 kinds of diagenetic facies. The reservoirs are evaluated from the lateral and vertical characteristics of the diagenetic facies and reservoir properties.
3. The effect of syndepositional faulting on sedimentation is analyzed for the abrupt slope, the gentle slope, and the hollow zone. Four tectonic-lithofacies models are developed for several periods in the Gubei depression, and the regional distribution of subtle traps is predicted from the hydrocarbon-accumulation characteristics of the different tectonic lithofacies.
4. Four types of compaction are recognized: normal compaction, abnormally high pressure, abnormally low pressure, and complex abnormal pressure. The dominant type is normal compaction, which occurs throughout the depression; abnormally high pressure is restricted to the deep hollow zone (depths greater than 3000 m), and abnormally low pressure occurs on the gentle slope and the faulted abrupt slope (depths of 1200-2500 m).
5. Two types of dynamic pool-formation systems (enclosed and partly enclosed) are recognized. They consist of source rocks from Es3 and Es4, cap rocks of deep lacustrine shale of Es1 and Es3, and sandstone reservoirs belonging to the 7 kinds of sedimentary systems in Es3 and Es4. According to petroleum-system theory, two petroleum systems are distinguished in Es3 and Es4 of the Gubei depression: a high- or normal-pressure self-sourced system and a normal- or low-pressure externally sourced system.
6. Three combined pool-formation models are recognized: lithological pools in the inner depression (high- or normal-pressure, self-sourced type), fault-block or fault-nose pools on the margin of the depression (normal type), and fault-block-lithological pools on the central low uplifted block (high- or normal-pressure type). The lithological pools lie in the center of the depression; the other pools lie on the gentle or abrupt slopes and are controlled by lithology, faulting, and unconformities.
7. A new technique and workflow for exploring subtle traps is proposed, comprising geological modeling, core description and log recognition, and well-log-constrained inversion, which together form a method for predicting subtle traps. Applying these methods and techniques, 6 hydrocarbon targets are predicted in three zones of the depression.

Relevance: 20.00%

Abstract:

Cross-well seismic is a relatively new geophysical method in which both the source and the receivers are placed in wells so that seismic waves are observed directly through the geological body. Because the method avoids the absorption of the high-frequency components of the seismic signal by the low-velocity weathered layer, extremely high-resolution signals can be acquired, and correspondingly fine images of inter-well formations, structures, and reservoirs can be obtained. Integrated study of the high-frequency S-wave and P-wave data, together with other data, makes it possible to map small faults and subtle structures and to address thin beds, reservoir connectivity, fluid distribution, steam injection, and fracturing. The method links high-resolution surface seismic, well logging, and reservoir engineering. In this paper, starting from the exploration and production situation of the oilfield and the theory of geophysical exploration, cross-well seismic technology and its key problems are studied, and an integrated workflow of field acquisition, data processing, and interpretation is developed and applied to oilfield development and to the optimization of development schemes. The contents and results are as follows. An overview is given of the status and development of the cross-well seismic method, its outstanding problems, and the gap between Chinese and international practice; foreign field acquisition systems for cross-well seismic are analyzed and compared, and the strengths and weaknesses of the systems made by two foreign manufacturers are identified, which is valuable for importing such acquisition systems into China. After analysis of the survey geometry design and field data, a general wave-field time-depth curve equation is derived, three types of tube waves are identified for the first time, and their generation mechanism is studied. Based on wave-field separation theory for cross-well seismic, and on the observation that different wave types show different attribute characteristics in different gather domains, several methods (for instance F-K filtering and median filtering) are applied to eliminate or suppress cross-well noise, and the upgoing and downgoing waves are successfully separated with satisfactory results. For wave-field numerical simulation, conventional ray tracing and its shortcomings are analyzed, and a minimum-travel-time ray tracing method based on Fermat's principle is proposed; the method is fast, leaves no rays trapped in "dead ends" or "blind spots" after many iterations, and is better suited to complex velocity models.

Travel-time interpolation is taken into account for the first time, and a dynamic shortest-path ray tracing method is developed for the first arrivals of arbitrarily complex media, including transmitted, diffracted, and refracted waves; it removes the restriction that rays travel only from node to node and improves the accuracy of both the minimum travel time and the ray path. A solution and the corresponding boundary conditions are derived for the fourth-order differential acoustic wave equation, and cross-well seismic synthetics are computed for given sources and receivers over multiple geological bodies, so that real cross-well wave fields can be recognized on a sound basis and field geometry design can be guided. A least-squares conjugate-gradient method is developed for cross-well velocity tomographic inversion, the objective function of the older high-frequency ray-based method is modified, and a thin-bed-oriented finite-frequency velocity tomographic inversion method is put forward; theoretical models show that the method is simple, effective, and important for tomographic imaging of complex geological bodies. Based on the characteristics of cross-well seismic data, a processing flow is built, optimized, and applied in production, yielding good velocity tomograms and cross-well reflection images. Because cross-well seismic data are acquired in the depth domain, how to interpret depth-domain data and extract attributes is a new subject; after studying depth-domain synthetics and trace integration, a logging-constrained wave-impedance inversion of cross-well seismic data is investigated and an interpretation workflow is established. Its application to cross-well seismic data gives good geological results in velocity tomographic inversion and reflection depth imaging and resolves many difficult problems in oilfield development, making the method valuable for optimizing development schemes and increasing EOR. Finally, building on conventional reservoir geological modeling from logging data, a new way of improving the accuracy of the reservoir geological model with high-resolution cross-well seismic data is discussed; it has been applied in the Fan 124 project with good results, which suggests a bright future for cross-well seismic technology.
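To make the velocity tomography step concrete, the sketch below shows a damped least-squares travel-time inversion solved with conjugate gradients, in the general spirit of the least-squares conjugate-gradient method mentioned above. It is not the dissertation's code; the straight-ray path matrix, the damping value, and the toy travel times are assumptions chosen only for illustration.

```python
# Illustrative CGLS-style travel-time tomography: solve for cell slownesses
# from ray-path lengths and picked travel times with Tikhonov damping.
import numpy as np

def cgls(G, t, damp=0.1, n_iter=50):
    """Return the slowness model m minimizing ||G m - t||^2 + damp^2 ||m||^2."""
    m = np.zeros(G.shape[1])              # slowness model, start from zero
    r = t - G @ m                         # travel-time residual
    g = G.T @ r - damp**2 * m             # (negative) gradient of the misfit
    p = g.copy()                          # first search direction
    gamma = g @ g
    for _ in range(n_iter):
        q = G @ p
        alpha = gamma / (q @ q + damp**2 * (p @ p))
        m += alpha * p                    # step along the conjugate direction
        r -= alpha * q
        g = G.T @ r - damp**2 * m
        gamma_new = g @ g
        if gamma_new < 1e-18:             # effectively converged
            break
        p = g + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Toy example: two cells between the wells crossed by three rays.
G = np.array([[100.0, 0.0],               # ray 1 travels 100 m in cell 1
              [0.0, 100.0],               # ray 2 travels 100 m in cell 2
              [70.0, 70.0]])              # ray 3 crosses both cells
t = np.array([0.05, 0.04, 0.063])         # picked travel times in seconds
print(cgls(G, t))                         # recovered slownesses, ~[5e-4, 4e-4]
```

In a real cross-well survey the path matrix would come from the ray tracing step and the model would contain thousands of cells, but the update structure is the same.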

Relevance: 20.00%

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation, and various types of resistivity logging tools have been developed to measure the true resistivity of the virgin formation. However, as proved reserves grow, the pay zones of interest are becoming thinner and thinner, especially in terrestrial (continental) deposits, so that electrical logging tools, constrained by the conflicting requirements of vertical resolution and depth of investigation, cannot deliver the true formation resistivity directly. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of newer tools. In geophysical inverse problems, non-uniqueness of the solution is inevitable because the data are noisy and the measurements carry insufficient information. This dissertation addresses the problem from three aspects: data acquisition, data processing and inversion, and application of the results together with uncertainty evaluation of the non-unique solution. Other shortcomings of traditional inversion methods, such as slow convergence and the dependence of the result on the initial values, are also considered. First, the uncertainties in the data to be processed are examined. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard procedure for determining formation resistivity; during inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, the errors can be as large as 30 percent because of mud-cake effects, even when the influence of a rugose borehole on the MSFL readings can be ignored. Furthermore, it is still debated whether the two logs can be combined quantitatively to determine formation resistivity, because their measurement principles differ. A new type of laterolog tool is therefore designed theoretically; it provides three curves with different depths of investigation and nearly the same vertical resolution, about 0.4 m. Second, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously, and because the new laterolog tool has not yet been applied in practice, the work concentrates on the two-parameter inversion (invasion radius and resistivity of the virgin formation) of conventional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion method; in the new method, the revision of a parameter depends not only on the damping itself but also on the difference between the measured and fitted data in the individual layers. At least two fewer iterations are needed than with the older method, so the computational cost of the inversion is reduced. The damped least-squares inversion method is a realization of Tikhonov's trade-off between the smoothness of the solution and the stability of the inversion process; it is obtained by linearizing the non-linear inverse problem, which inevitably makes the solution dependent on the initial parameter values. With the development of non-linear processing methods, the efficiency of such methods has therefore been seriously questioned, and an artificial neural network method is proposed in this dissertation.

A database of tool responses to formation parameters is built by modeling the laterolog tool and is then used to train the neural nets. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. The results show that the neural-network inversion can replace the traditional inversion within a single formation and can also supply initial values for the traditional method. Whatever method is used, non-uniqueness and uncertainty of the solution are unavoidable, so it is wise to evaluate them when the inversion results are applied. Bayes' theorem provides a way to do so; the approach is illustrated for a single formation and gives plausible results. Finally, the traditional least-squares inversion method is used to process raw logging data; compared with core analysis, the calculated oil saturation is 20 percent higher than that obtained from the unprocessed data.
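As a concrete illustration of the kind of damped least-squares update used in a two-parameter inversion, the sketch below inverts a pair of deep and shallow readings for invasion radius and true resistivity. The pseudo-geometric-factor forward model, the fixed invaded-zone resistivity, the scalar damping value, and the clipping bounds are all assumptions made only for this example; they do not represent a real dual laterolog response or the dissertation's unequally weighted damping scheme.

```python
# Illustrative damped least-squares (Gauss-Newton with a damping term) inversion
# for two parameters: invasion radius ri and true resistivity Rt.
# The toy forward model below is NOT a real dual laterolog response.
import numpy as np

RXO = 2.0                                     # assumed invaded-zone resistivity

def forward(ri, rt):
    """Toy pseudo-geometric-factor model for deep and shallow readings."""
    j_deep, j_shallow = np.exp(-ri / 2.0), np.exp(-ri / 0.5)
    return np.array([j_deep * RXO + (1.0 - j_deep) * rt,        # "deep" reading
                     j_shallow * RXO + (1.0 - j_shallow) * rt])  # "shallow" reading

def invert(d_obs, x0=(0.5, 10.0), damp=0.1, n_iter=30):
    x = np.array(x0, dtype=float)             # [ri in metres, Rt in ohm.m]
    for _ in range(n_iter):
        d0 = forward(*x)
        residual = d_obs - d0
        # Numerical Jacobian of the forward model.
        J = np.column_stack([(forward(x[0] + 1e-4, x[1]) - d0) / 1e-4,
                             (forward(x[0], x[1] + 1e-4) - d0) / 1e-4])
        # Damped normal equations: (J^T J + damp I) dx = J^T residual.
        dx = np.linalg.solve(J.T @ J + damp * np.eye(2), J.T @ residual)
        x += dx
        x[0] = np.clip(x[0], 0.05, 5.0)       # keep the invasion radius physical
        x[1] = max(x[1], 0.1)                 # keep Rt positive
    return x

d_obs = forward(1.0, 50.0)                    # synthetic "measurements"
print(invert(d_obs))                          # recovers roughly ri = 1.0, Rt = 50
```

The dissertation's unequally weighted scheme would further adjust the damping layer by layer according to the data misfit; the fixed scalar damping here stands in for that.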

Relevance: 20.00%

Abstract:

The binocular perception of shape and of depth relations between objects can change considerably when the viewing direction is changed by only a small angle. We explored this effect psychophysically and found a strong depth-reduction effect for large disparity gradients. The effect is strongest for horizontally oriented stimuli, and stronger for line stimuli than for point stimuli. This depth-scaling effect is discussed within a computational framework for stereo based on a Bayesian approach, which allows information from different types of matching primitives to be integrated with weights according to their robustness.
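A minimal sketch of the reliability-weighted integration such a Bayesian framework implies, assuming independent Gaussian depth estimates from two kinds of matching primitives; the numbers and the Gaussian assumption are illustrative, not the paper's model.

```python
# Illustrative sketch: Bayesian (precision-weighted) fusion of depth estimates
# from two matching primitives, e.g. point features vs. line features.
import numpy as np

def fuse(depths, sigmas):
    """Combine independent Gaussian depth estimates; weights = 1/variance."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    d = np.asarray(depths, dtype=float)
    fused = np.sum(w * d) / np.sum(w)          # posterior mean
    fused_sigma = np.sqrt(1.0 / np.sum(w))     # posterior standard deviation
    return fused, fused_sigma

# A robust point-based match (sigma = 1 cm) pulls the estimate toward itself
# more strongly than a less reliable line-based match (sigma = 3 cm).
print(fuse(depths=[50.0, 44.0], sigmas=[1.0, 3.0]))   # ~ (49.4, 0.95)
```

More reliable primitives therefore dominate the fused estimate, which is the weighting-by-robustness idea referred to above.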

Relevance: 20.00%

Abstract:

The inferior temporal cortex (IT) of monkeys is thought to play an essential role in visual object recognition. Inferotemporal neurons are known to respond to complex visual stimuli, including patterns such as faces, hands, or other body parts. What is the role of such neurons in object recognition? The present study examines this question in combined psychophysical and electrophysiological experiments in which monkeys learned to classify and recognize novel visual 3D objects. A population of neurons in IT was found to respond selectively to the objects that the monkeys had recently learned to recognize. A large majority of these cells discharged maximally for one view of an object, while their response fell off gradually as the object was rotated away from the neuron's preferred view. Most neurons also exhibited orientation-dependent responses during view-plane rotations. Some neurons were tuned around two views of the same object, and a very small number of cells responded in a view-invariant manner. For five objects that were used extensively during the training of the animals, and for which behavioral performance became view-independent, multiple cells were found that were tuned around different views of the same object. No selective responses were ever encountered for views that the animal systematically failed to recognize. The results of our experiments suggest that neurons in this area can develop a complex receptive-field organization as a consequence of extensive training in the discrimination and recognition of objects. Simple geometric features did not appear to account for the neurons' selective responses. These findings support the idea that a population of neurons -- each tuned to a different object aspect, and each showing a certain degree of invariance to image transformations -- may, as an assembly, encode complex 3D objects. In such a system, several neurons may be active for any given vantage point, with a single unit acting like a blurred template for a limited neighborhood of a single view.
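The population idea in the last sentence can be sketched computationally as a set of view-tuned units, each acting as a blurred (Gaussian) template around its preferred view; the preferred views and tuning width below are arbitrary assumptions, not values fitted to the recorded neurons.

```python
# Illustrative sketch: a population of view-tuned units, each responding most
# strongly near its preferred view and falling off as the object rotates away.
import numpy as np

def unit_response(view_deg, preferred_deg, width_deg=40.0):
    """Gaussian tuning on the circle of rotation angles."""
    d = (view_deg - preferred_deg + 180.0) % 360.0 - 180.0   # wrapped difference
    return np.exp(-(d ** 2) / (2.0 * width_deg ** 2))

preferred_views = [0.0, 90.0, 180.0, 270.0]   # four units tuned to different views

for view in (0.0, 45.0, 135.0):
    pop = [unit_response(view, p) for p in preferred_views]
    # Several units are partially active at any vantage point; summing over the
    # population gives a roughly view-invariant "object present" signal.
    print(view, [round(r, 2) for r in pop], round(sum(pop), 2))
```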

Relevance: 20.00%

Abstract:

This report addresses the problem of fault tolerance to system failures for database systems that are to run on highly concurrent computers. It assumes that, in general, an application may have a wide distribution in the lifetimes of its transactions. Logging remains the method of choice for ensuring fault tolerance. Generational garbage collection techniques manage the limited disk space reserved for log information; this approach does not require periodic checkpoints and is well suited to applications with a broad range of transaction lifetimes. An arbitrarily large collection of parallel log streams provides the necessary disk bandwidth.
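A hedged sketch of how generational reclamation of log space might look; the record format, generation policy, and thresholds are assumptions for illustration and simplify away the redo/undo and recovery details of the report.

```python
# Illustrative sketch: generation-style reclamation of log segments. Records of
# short-lived transactions land in a young generation that is scanned often;
# records of still-running (long-lived) transactions are promoted to an older
# generation that is scanned rarely. All names and thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class LogRecord:
    txn_id: int
    payload: bytes
    committed: bool = False

@dataclass
class Generation:
    scan_interval: int                      # how often this generation is scanned
    records: list = field(default_factory=list)

young = Generation(scan_interval=1)         # scanned on every collection pass
old = Generation(scan_interval=10)          # scanned rarely; holds long-lived txns

def append(rec: LogRecord) -> None:
    young.records.append(rec)

def collect(pass_number: int) -> None:
    """Reclaim records of committed transactions; promote live young records."""
    for gen, is_young in ((young, True), (old, False)):
        if pass_number % gen.scan_interval:
            continue                        # not this generation's turn
        survivors = [r for r in gen.records if not r.committed]
        gen.records = [] if is_young else survivors
        if is_young:
            old.records.extend(survivors)   # long-lived txns move to the old gen

append(LogRecord(1, b"update A", committed=True))
append(LogRecord(2, b"update B"))           # still running: will be promoted
collect(pass_number=1)
print(len(young.records), len(old.records))  # -> 0 1
```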

Relevance: 20.00%

Abstract:

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. Early vision modules typically supply sparse depth, orientation, and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. The method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. The computational model helps conceptualize the problem and leads to two possible implementations: analog and digital. The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions, and subtractions. A parallel digital algorithm can be derived by using finite-difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.
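To make the coupled depth/slope idea concrete, here is a minimal 1-D sketch that solves jointly for dense depth and slope profiles from sparse depth samples and a sparse orientation sample. For brevity it solves the resulting linear least-squares system directly rather than by the iterative mesh-of-processors relaxation described above, and the grid size, weights, and measurement values are assumptions.

```python
# Minimal 1-D coupled depth/slope sketch: dense depth z and dense slope p are
# solved for together from sparse depth data, a sparse slope datum, a coupling
# constraint (p should equal the finite difference of z), and slope smoothness.
import numpy as np

n, w_smooth, w_data = 21, 1.0, 10.0
rows, rhs = [], []

def add_row(coeffs, value):
    r = np.zeros(2 * n)                     # unknowns: z[0..n-1], then p[0..n-1]
    for idx, c in coeffs:
        r[idx] = c
    rows.append(r)
    rhs.append(value)

for i in range(n - 1):
    # Coupling constraint: z[i+1] - z[i] - p[i] = 0 (unit grid spacing).
    add_row([(i + 1, 1.0), (i, -1.0), (n + i, -1.0)], 0.0)
    # Slope smoothness: p[i+1] - p[i] = 0.
    add_row([(n + i + 1, w_smooth), (n + i, -w_smooth)], 0.0)

for i, z in {0: 0.0, 20: 2.0}.items():      # sparse depth measurements
    add_row([(i, w_data)], w_data * z)
for i, p in {10: 0.3}.items():              # sparse orientation (slope) measurement
    add_row([(n + i, w_data)], w_data * p)

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
depth, slope = sol[:n], sol[n:]
print(np.round(depth, 2))                   # dense depth profile through the data
print(np.round(slope, 2))                   # dense slope profile, bumped near i = 10
```

Iterating on these same local equations node by node, instead of solving them directly, corresponds to the analog-network and mesh-of-processors implementations described above.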

Relevance: 20.00%

Abstract:

We show that if a language is recognized within certain error bounds by constant-depth quantum circuits over a finite family of gates, then it is computable in (classical) polynomial time. In particular, our results imply EQNC^0 ⊆ P, where EQNC^0 is the constant-depth analog of the class EQP. On the other hand, we adapt and extend ideas of Terhal and DiVincenzo [?] to show that, for any family

Relevance: 20.00%

Abstract:

The proliferation of inexpensive workstations and networks has created a new era in distributed computing. At the same time, non-traditional applications such as computer-aided design (CAD), computer-aided software engineering (CASE), geographic-information systems (GIS), and office-information systems (OIS) have placed increased demands for high-performance transaction processing on database systems. The combination of these factors gives rise to significant challenges in the design of modern database systems. In this thesis, we propose novel techniques whose aim is to improve the performance and scalability of these new database systems. These techniques exploit client resources through client-based transaction management. Client-based transaction management is realized by providing logging facilities locally even when data is shared in a global environment. This thesis presents several recovery algorithms which utilize client disks for storing recovery related information (i.e., log records). Our algorithms work with both coarse and fine-granularity locking and they do not require the merging of client logs at any time. Moreover, our algorithms support fine-granularity locking with multiple clients permitted to concurrently update different portions of the same database page. The database state is recovered correctly when there is a complex crash as well as when the updates performed by different clients on a page are not present on the disk version of the page, even though some of the updating transactions have committed. This thesis also presents the implementation of the proposed algorithms in a memory-mapped storage manager as well as a detailed performance study of these algorithms using the OO1 database benchmark. The performance results show that client-based logging is superior to traditional server-based logging. This is because client-based logging is an effective way to reduce dependencies on server CPU and disk resources and, thus, prevents the server from becoming a performance bottleneck as quickly when the number of clients accessing the database increases.
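A hedged sketch of the client-based logging idea described above; the record format, the commit-time log force, and the class names are illustrative assumptions and omit the fine-granularity locking and crash-recovery machinery that the thesis's algorithms actually provide.

```python
# Illustrative sketch of client-based logging: each client appends redo records
# for its own page updates to a local log and forces that log at commit, so the
# shared server never has to absorb per-update log traffic.
from dataclasses import dataclass
from typing import List

@dataclass
class RedoRecord:
    txn_id: int
    page_id: int
    offset: int
    after_image: bytes        # new bytes written at (page_id, offset)

class ClientLog:
    """Per-client log kept on the client's local disk."""
    def __init__(self) -> None:
        self.buffer: List[RedoRecord] = []
        self.stable: List[RedoRecord] = []   # stands in for the on-disk log file

    def append(self, rec: RedoRecord) -> None:
        self.buffer.append(rec)              # buffered write, no server round trip

    def force(self) -> None:
        self.stable.extend(self.buffer)      # flush to local disk at commit
        self.buffer.clear()

def commit(txn_id: int, log: ClientLog) -> None:
    # Force the local log before reporting the commit; only the commit decision
    # (not every update record) needs to reach the server.
    log.force()
    print(f"txn {txn_id} committed with {len(log.stable)} records on client disk")

log = ClientLog()
log.append(RedoRecord(txn_id=7, page_id=42, offset=128, after_image=b"\x01\x02"))
log.append(RedoRecord(txn_id=7, page_id=42, offset=512, after_image=b"\x03"))
commit(7, log)
```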

Relevance: 20.00%

Abstract:

Small depth quantum circuits have proved to be unexpectedly powerful in comparison to their classical counterparts. We survey some of the recent work on this and present some open problems.

Relevance: 20.00%

Abstract:

This article applies a recent theory of 3-D biological vision, called FACADE Theory, to explain several percepts which Kanizsa pioneered. These include 3-D pop-out of an occluding form in front of an occluded form, leading to completion and recognition of the occluded form; 3-D transparent and opaque percepts of Kanizsa squares, with and without Varin wedges; and interactions between percepts of illusory contours, brightness, and depth in response to 2-D Kanizsa images. These explanations clarify how a partially occluded object representation can be completed for purposes of object recognition, without the completed part of the representation necessarily being seen. The theory traces these percepts to neural mechanisms that compensate for measurement uncertainty and complementarity at individual cortical processing stages by using parallel and hierarchical interactions among several cortical processing stages. These interactions are modelled by a Boundary Contour System (BCS) that generates emergent boundary segmentations and a complementary Feature Contour System (FCS) that fills in surface representations of brightness, color, and depth. The BCS and FCS interact reciprocally with an Object Recognition System (ORS) that binds BCS boundary and FCS surface representations into attentive object representations. The BCS models the parvocellular LGN→Interblob→Interstripe→V4 cortical processing stream, the FCS models the parvocellular LGN→Blob→Thin Stripe→V4 cortical processing stream, and the ORS models inferotemporal cortex.

Relevance: 20.00%

Abstract:

When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.

Relevance: 20.00%

Abstract:

Background. Thoracic epidural catheters provide the best quality of postoperative pain relief for major abdominal and thoracic surgical procedures, but their placement is one of the most challenging procedures in the repertoire of an anesthesiologist. Most patients presenting for a procedure that would benefit from a thoracic epidural catheter have already had high-resolution imaging that may be useful to assist placement. Methods. This retrospective study used data from 168 patients to examine how well the epidural-skin distance (ESD) measured on computed tomography (CT) predicts the loss-of-resistance depth recorded during epidural placement. In addition, the ability of anesthesiologists to measure this distance was compared with that of a radiologist specializing in spine imaging. Results. There was a strong association between the CT measurement and the loss-of-resistance depth (P < 0.0001); the presence of morbid obesity (BMI > 35) changed this relationship (P = 0.007). The CT measurements made by anesthesiologists were similar to those of the gold-standard radiologist (all individual ICCs > 0.9). Conclusions. Overall, this study supports examining a recent CT scan to aid the placement of a thoracic epidural catheter. Making use of these scans may lead to faster epidural placements, fewer accidental dural punctures, and better epidural blockade.
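As an illustrative sketch only (synthetic numbers, not the study's data), the association and the obesity effect reported above can be examined with a linear model that includes an ESD-by-obesity interaction term:

```python
# Illustrative sketch: does CT epidural-skin distance predict loss-of-resistance
# depth, and does morbid obesity modify the relationship? Fitted by ordinary
# least squares on synthetic data standing in for the 168 patients.
import numpy as np

rng = np.random.default_rng(0)
n = 168
esd = rng.normal(5.5, 1.2, n)                       # CT epidural-skin distance (cm)
obese = (rng.random(n) < 0.25).astype(float)        # morbid obesity indicator
# Synthetic "observed" loss-of-resistance depth with a steeper slope when obese.
lor = 0.8 + 0.9 * esd + 0.3 * obese * esd + rng.normal(0.0, 0.4, n)

# Design matrix: intercept, ESD, obesity, ESD x obesity interaction.
X = np.column_stack([np.ones(n), esd, obese, esd * obese])
beta, *_ = np.linalg.lstsq(X, lor, rcond=None)
print("intercept, ESD slope, obesity, interaction:", np.round(beta, 2))
# A clearly non-zero interaction coefficient corresponds to the study's finding
# that morbid obesity changed the ESD / loss-of-resistance relationship.
```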