823 results for Business -- Data processing -- Management


Relevance: 100.00%

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation. To acquire the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves grow, the pay zones of interest are becoming thinner and thinner, especially in terrestrial depositional oilfields, so that electrical logging tools, limited by the contradictory requirements of resolution and depth of investigation, cannot provide the true formation resistivity directly. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because of noisy data and insufficient measurement information. This dissertation addresses the problem from three aspects: data acquisition, data processing/inversion, and application of the results together with uncertainty evaluation of the non-unique solution. Other shortcomings of traditional inversion methods, such as slow convergence and the dependence of results on initial values, are also treated. First, I deal with uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, errors can reach 30 percent because of mudcake influence, even when rugose-borehole effects on the MSFL readings can be ignored. Furthermore, there is still debate about whether the two logs can be combined quantitatively to determine formation resistivity, given their different measurement principles. A new type of laterolog tool is therefore designed theoretically; it provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Second, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously, and the new laterolog tool has not yet been applied in practice, my work focuses on two-parameter inversion (invasion radius and virgin-formation resistivity) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in traditional inversion. In this new method, a parameter update depends not only on the damping factor itself but also on the misfit between the measured and fitted data in different layers. At least two fewer iterations are needed than with the older method, so the computational cost of inversion is reduced. The damped least-squares inversion method realizes Tikhonov's trade-off between smoothness of the solution and stability of the inversion process. Because it linearizes a non-linear inverse problem, the solution inevitably depends on the initial parameter values; debate about the efficiency of such methods has therefore intensified as non-linear processing methods have developed. An artificial neural network method is proposed in this dissertation. A database of tool responses to formation parameters is built by modeling the laterolog tool and is then used to train the neural networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. Results show that the neural network inversion method can replace the traditional inversion method for a single formation and can also supply initial values for the traditional method. Whatever method is used, non-uniqueness and uncertainty of the solution are unavoidable, so it is wise to evaluate them when applying inversion results. Bayes' theorem provides a way to do so; the approach is illustrated for a single formation and achieves plausible results. Finally, the traditional least-squares inversion method is applied to raw logging data: the calculated oil saturation is about 20 percent higher than that obtained without inversion and agrees better with core analysis.
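The weighted damped least-squares update described above is, in spirit, a Levenberg-Marquardt iteration. The following is a minimal sketch under that reading, not the dissertation's code: the forward model `forward`, the per-layer weights `w`, and the two-parameter model vector are hypothetical stand-ins.

```python
import numpy as np

def damped_lsq_inversion(forward, d_obs, m0, damp=1.0, w=None, n_iter=20, tol=1e-6):
    """Two-parameter damped least-squares inversion (Levenberg-Marquardt style).

    forward : callable mapping model vector m -> predicted log readings
    d_obs   : observed dual-laterolog readings
    m0      : initial guess [invasion_radius, virgin_resistivity]
    damp    : Tikhonov damping factor (trade-off between fit and stability)
    w       : per-measurement weights (e.g. larger misfit -> larger update)
    """
    m = np.asarray(m0, dtype=float)
    w = np.ones_like(d_obs) if w is None else w
    for _ in range(n_iter):
        r = d_obs - forward(m)                        # residual (misfit)
        # Finite-difference Jacobian of the forward model
        J = np.empty((len(d_obs), len(m)))
        for j in range(len(m)):
            pert = np.zeros_like(m)
            pert[j] = 1e-4 * max(abs(m[j]), 1.0)
            J[:, j] = (forward(m + pert) - forward(m)) / pert[j]
        # Damped normal equations: (J^T W J + damp*I) dm = J^T W r
        A = J.T @ (w[:, None] * J) + damp * np.eye(len(m))
        dm = np.linalg.solve(A, J.T @ (w * r))
        m += dm
        if np.linalg.norm(dm) < tol * np.linalg.norm(m):
            break
    return m
```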

Relevance: 100.00%

Abstract:

This thesis mainly addresses the wavelet transform and the frequency-division method. It describes frequency-division processing of pre-stack and post-stack seismic data and its application to inversion-based noise attenuation, frequency-division residual static correction, and the use of high-resolution data in reservoir inversion. The thesis not only develops frequency division and inversion in theory but also verifies them with model calculations; all of the methods are integrated, and processing of real data demonstrates their effectiveness. Starting from wavelet-transform theory, the thesis analyzes the differences and limitations of t-x and f-x prediction-filter noise attenuation, and argues that noise and signal can be separated by wavelet frequency division according to their differences in phase, amplitude, and frequency. Comparison with f-x coherent-noise removal confirms the effectiveness and practicality of frequency division for isolating coherent and random noise. To avoid side effects in noise-free areas, an area-constraint strategy applies frequency-division processing only within the noisy area, which prevents the loss of low frequencies elsewhere. Residual moveout differences strongly affect stack image quality and resolution, and different frequency components have different residual moveouts. Frequency-division residual static correction performs the frequency division and computes the residual correction for each band, solving the problem of band-dependent corrections while protecting the high-frequency information in the data. On real data it gives good results in eliminating residual moveout of pre-stack data, improving stack image quality, and raising data resolution. The thesis also analyzes the character of random noise and its description in the time and frequency domains, derives inversion-based prediction solutions, and realizes frequency-division inversion attenuation of random noise; results on real data show that inversion-based noise removal has its own advantages. By analyzing resolution-related parameters and high-resolution processing technology, the thesis describes the relation between the frequency domain and resolution, the parameters that control resolution, and methods to increase it. It also gives processing flows for high-resolution data and examines how such data affect reservoir inversion, verifying the accuracy and precision of the inversion results. The research shows that frequency-division noise attenuation, frequency-division residual correction, and inversion-based noise attenuation are effective methods for increasing the signal-to-noise ratio and resolution of seismic data.
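As a small illustration of wavelet frequency division, the sketch below splits a trace into wavelet bands with PyWavelets, attenuates each detail band, and reconstructs. The soft threshold stands in for the thesis's phase/amplitude/frequency criteria, and the area constraint is omitted; everything here is a generic example, not the thesis's algorithm.

```python
import numpy as np
import pywt  # PyWavelets

def frequency_division_denoise(trace, wavelet="db4", level=4, gains=None):
    """Split a seismic trace into wavelet frequency bands, attenuate noise
    band by band, and reconstruct.

    gains : optional dict {band_index: gain} applied per detail band; by
            default a simple soft threshold stands in for a noise criterion.
    """
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    for i, c in enumerate(coeffs[1:], start=1):   # detail bands, coarse to fine
        if gains is not None and i in gains:
            coeffs[i] = c * gains[i]
        else:
            thr = np.median(np.abs(c)) / 0.6745 * np.sqrt(2 * np.log(len(c)))
            coeffs[i] = pywt.threshold(c, thr, mode="soft")
    return pywt.waverec(coeffs, wavelet)[: len(trace)]

# Example: denoise a synthetic noisy trace
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
denoised = frequency_division_denoise(noisy)
```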

Relevance: 100.00%

Abstract:

To deal with some key problems in multi-component seismic exploration, several methods are introduced in this thesis, based on an extensive survey of the literature on multi-component seismic theory and methods. First, to detect fracture density and orientation in the igneous, carbonate, and shale reservoirs that are abundant in domestic oilfields with low degrees of exploration and development, a new fast/slow shear-wave separation method, the Ratio Method, is developed from shear-wave splitting theory; it yields the anisotropy coefficient as well as fracture parameters such as density and azimuth. A second topic is the use of the seismic velocity ratio (Vp/Vs) to predict lithological parameters of the subsurface medium. Because velocity-ratio estimation from traveltime ratios is often infeasible, owing to the low signal-to-noise ratio of S-wave data acquired on land, a new method based on detailed velocity analysis is introduced. Third, pre-stack Kirchhoff integral migration, a method developed in recent years, yields both P- and S-component sections as well as P/S amplitude-ratio sections; this thesis investigates their use in interpreting low-amplitude structures and lithological traps. The fast/slow shear-wave separation method is then applied to detect fracture density and azimuth in an igneous gas reservoir and in the coal seams of a coalfield. The two velocity-ratio methods are applied to lithological prediction at the gas field and the coalfield after summarizing a large body of experimental results obtained at home and abroad. P- and S-wave sections and amplitude-ratio sections are used to identify low-amplitude structures and lithological traps in the slope area of an oil-bearing sedimentary basin. The fracture density and azimuth calculated with the introduced method match the regional stress field and drilling data well, the predicted lithology reflects the drilling data, and some of the low-amplitude and lithological traps identified by Kirchhoff migration are verified by drilling. These results indicate that the methods are valuable for complex oil and gas reservoirs and can be applied in other areas.
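The abstract does not detail the Ratio Method itself, so the sketch below illustrates fast/slow shear-wave separation with a generic Alford-rotation-style grid search instead: it scans rotation azimuths of the two horizontal components and picks the angle that minimizes cross-component energy. The function name and the radial/transverse inputs are assumptions for the illustration.

```python
import numpy as np

def split_shear_waves(radial, transverse, n_angles=180):
    """Estimate the fast shear-wave polarization by scanning rotation angles
    and minimizing the cross-component energy of the rotated pair
    (Alford-rotation-style grid search)."""
    best_angle, best_energy = 0.0, np.inf
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        c, s = np.cos(angle), np.sin(angle)
        fast = c * radial + s * transverse
        slow = -s * radial + c * transverse
        cross = np.sum(fast * slow) ** 2     # energy leaking between components
        if cross < best_energy:
            best_energy, best_angle = cross, angle
    c, s = np.cos(best_angle), np.sin(best_angle)
    fast = c * radial + s * transverse       # fast shear-wave component
    slow = -s * radial + c * transverse      # slow shear-wave component
    return np.degrees(best_angle), fast, slow
```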

Relevance: 100.00%

Abstract:

With the development of oil and gas exploration in China, continental exploration has shifted from structural reservoirs to subtle reservoirs. The reserves of subtle reservoirs already found account for more than 60 percent of discovered oil and gas reserves, so their exploration is becoming ever more important and can be taken as the main direction for future reserve growth. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most continental rift basins in East China have entered exploration stages of medium to high maturity. Although the seismic data quality there is relatively good, these areas feature thin sands, small faults, and strata of limited extent, which demand high-resolution seismic data; improving the signal-to-noise ratio of the high-frequency components is therefore an important task. West China, by contrast, presents complex landforms, deeply buried targets, complex geological structures, numerous faults, small traps, poor rock properties, many overpressured formations, and difficult drilling; the seismic records show low signal-to-noise ratios and many kinds of noise, which calls for noise-attenuation methods in both acquisition and processing. Oil and gas exploration thus needs high-resolution geophysical techniques to support the resource strategy of keeping production and reserves stable in East China while growing them in West China. A high signal-to-noise ratio is the foundation: high resolution and high fidelity are impossible without it. We therefore emphasize research on structure-guided methods for improving the signal-to-noise ratio in complex areas, and put forward several noise-attenuation methods that truly reflect the geological features: they preserve geological structure, keep the edges of geological bodies, and improve the identification of oil and gas traps. The ideas of strengthening fundamentals, stressing innovation, and emphasizing application run through the paper. Conventional dip scanning, centered on the scanned point, inevitably blurs the edges of geological features such as faults and fractures. We develop a new dip-scanning scheme that scans from both sides of an endpoint to solve this problem, and on it we build methods of coherence-based signal estimation, coherence-based waveform characterization, and most-homogeneous dip scanning for noise attenuation. These preserve geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip scanning works in the time-space domain; a new dip-scanning method in the frequency-wavenumber domain is also put forward. It exploits the separation of events with different dips in the f-k domain, reducing noise while recovering dip information. We also describe a methodology for studying and developing filters based on differential equations. It transforms filtering equations from the frequency or f-k domain into the time or time-space domain and solves them with a finite-difference algorithm. The method does not require stationary data, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter; it is also computationally efficient. We further put forward a matching-pursuit method for noise suppression. It decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions, chosen to best match the signal structures, so the effective signal can be extracted from the noisy record and the noise reduced. We also introduce beamforming filtering for noise elimination; processing of real seismic data shows that it is effective in attenuating multiples, including internal multiples, improving the signal-to-noise ratio and resolution while keeping the effective signal at high fidelity. Tests on theoretical models and real seismic data prove that the methods in this paper effectively suppress random noise, eliminate coherent noise, and improve the resolution of seismic data; they are practical, and their effect is evident.
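As an illustration of the matching-pursuit decomposition described above, the sketch below implements the greedy atom selection over a redundant dictionary. The Gabor-style dictionary of windowed cosines is an invented example for the demonstration, not the thesis's dictionary.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20, tol=1e-3):
    """Greedy matching pursuit: repeatedly pick the dictionary atom (column,
    assumed unit-norm) most correlated with the residual and subtract it."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        corr = dictionary.T @ residual            # correlation with every atom
        k = np.argmax(np.abs(corr))
        approx += corr[k] * dictionary[:, k]      # accumulate matched waveform
        residual -= corr[k] * dictionary[:, k]
        if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
            break
    return approx, residual

# Illustrative Gabor-style dictionary of windowed cosines (unit-norm columns)
n = 256
t = np.arange(n)
atoms = []
for f in (0.02, 0.05, 0.1, 0.2):
    for c in range(0, n, 16):
        g = np.exp(-0.5 * ((t - c) / 12.0) ** 2) * np.cos(2 * np.pi * f * t)
        atoms.append(g / np.linalg.norm(g))
D = np.column_stack(atoms)
```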

Relevance: 100.00%

Abstract:

The reflection seismic prospecting technique is important and widely used in petroleum and coal surveying, and has matured in every aspect, from data acquisition through data processing to interpretation. Seismic prospecting in metallic mines, however, especially high-resolution seismic prospecting, is still at the stage of study and exploration. This paper first sets out the basic theory and the current state of research on seismic reflection in metallic mines, and then details the basic theory, improvements, convergence speed, and capability of the integrated global optimization method. The basic theory, implementation, and practical effect of the vector noise-suppression algorithm are also introduced. Building on the application of the integrated global optimization method to static correction and on the vector noise-suppression algorithm, we carefully processed seismic data from the Tongling metallic mine, and we describe the processing flow, the key steps, and the results. From the processed sections we analyzed the main reflection characteristics, the geological interpretation, the reflection structure of the upper crust, the spatial distribution of the Wutong Formation, the geometry of certain lithological bodies, and the contact relations of the horizons unveiled.
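The abstract does not specify the integrated global optimization method, so as a generic stand-in the sketch below formulates residual statics as a global optimization solved by simulated annealing, choosing one integer time shift per trace to maximize stack power. The function names, the objective, and the annealing schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def anneal_statics(traces, max_shift=8, n_steps=5000, t0=1.0, cooling=0.999):
    """Simulated annealing for residual statics: choose an integer time shift
    per trace so that the stacked trace has maximum power."""
    n_traces, n_samples = traces.shape
    shifts = np.zeros(n_traces, dtype=int)

    def stack_power(s):
        stack = np.zeros(n_samples)
        for i in range(n_traces):
            stack += np.roll(traces[i], s[i])
        return np.sum(stack ** 2)

    energy = -stack_power(shifts)                 # minimize negative power
    temp = t0
    rng = np.random.default_rng(0)
    for _ in range(n_steps):
        cand = shifts.copy()
        i = rng.integers(n_traces)                # perturb one random station
        cand[i] = int(np.clip(cand[i] + rng.choice((-1, 1)),
                              -max_shift, max_shift))
        e_new = -stack_power(cand)
        # Metropolis acceptance: always accept improvements, sometimes worse
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            shifts, energy = cand, e_new
        temp *= cooling                           # cool the temperature
    return shifts
```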

Relevance: 100.00%

Abstract:

The content of this paper is based on research carried out while the author took part in a key project of the NSFC and a key Knowledge Innovation project of the CAS. The whole paper is built around the boundary problem that inevitably arises in seismic migration and inversion. The boundary problem is a common issue in seismic data processing: at an artificial boundary, a reflected wave that does not exist in reality is generated when the incident seismic wave arrives. This interferes with the propagation of the seismic wave and introduces aliased information into the processed profile; the quality of the whole seismic section then degrades and subsequent work suffers. The paper also reviews the development of seismic migration, describes its current state, and anticipates possible breakthroughs. Addressing the absorbing-boundary problem in migration, we derive a wide-angle absorbing boundary condition and compare it with the boundary effect in fast approximate computations with Toeplitz matrices. For the fast approximate inversion of Toeplitz systems, we introduce a preconditioned conjugate gradient method that uses circulant extension to construct the preconditioner; in particular, a combined preconditioner reduces the boundary effect during computation. Comparing the boundary problem in seismic migration with that in Toeplitz matrix inversion, we find that a change of boundary condition changes the eigenvalues of the coefficient matrix, and the change of eigenvalues causes the boundary effect. The paper gives a qualitative analysis of the relationship between the coefficient-matrix eigenvalues and the boundary effect; quantitative analysis is left for further research.
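A minimal sketch of preconditioned conjugate gradients for Toeplitz systems follows, assuming SciPy. The Strang-style circulant preconditioner applied via the FFT is one standard choice; the paper's combined preconditioner built by circulant extension may differ in detail.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

def circulant_preconditioned_cg(first_col, b):
    """Solve T x = b for a symmetric positive-definite Toeplitz matrix T,
    using CG with a Strang-style circulant preconditioner applied via FFT."""
    n = len(first_col)
    T = toeplitz(first_col)

    # Strang circulant: keep the central diagonals of T, wrap the rest around
    c = first_col.copy()
    c[n // 2 + 1:] = first_col[1: n - n // 2][::-1]
    eig = np.fft.fft(c)                    # eigenvalues of the circulant

    def apply_Cinv(x):                     # C^{-1} x via forward/inverse FFT
        return np.real(np.fft.ifft(np.fft.fft(x) / eig))

    M = LinearOperator((n, n), matvec=apply_Cinv)
    x, info = cg(T, b, M=M)                # info == 0 means converged
    return x, info

# Example: an SPD Toeplitz system with exponentially decaying off-diagonals
n = 64
col = 0.5 ** np.arange(n)
col[0] = 3.0
b = np.random.default_rng(1).standard_normal(n)
x, info = circulant_preconditioned_cg(col, b)
```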

Relevance: 100.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC x GC) has attracted much attention for the analysis of complex samples. Even with the large peak capacity of GC x GC, overlapping peaks are still common. In this paper, a new method is developed to resolve overlapped peaks based on mass conservation and the exponentially modified Gaussian (EMG) model. Linear relationships were obtained between the calculated sigma and tau of the primary peaks and the corresponding retention time (t(R)), with correlation coefficients above 0.99. Based on these relationships, the elution profile of each compound in an overlapped peak can be simulated, even for peaks never separated in the second dimension. The proposed method has proven to give more accurate peak areas than the general data-processing method.
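As an illustration of the EMG peak model, the sketch below fits an exponentially modified Gaussian to a synthetic chromatographic peak using SciPy's exponnorm distribution (shape K = tau/sigma). The data and starting values are invented for the example, and the paper's mass-conservation constraint is not included.

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import curve_fit

def emg_peak(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian (mu, sigma) convolved with
    an exponential decay of time constant tau; scipy's shape is K = tau/sigma."""
    return area * exponnorm.pdf(t, K=tau / sigma, loc=mu, scale=sigma)

# Synthetic single peak for the demonstration
t = np.linspace(0, 30, 600)
true = emg_peak(t, area=100.0, mu=10.0, sigma=0.8, tau=2.0)
noisy = true + np.random.default_rng(2).normal(0, 0.05, t.size)

# Fit; positivity bounds keep sigma and tau physical during optimization
popt, pcov = curve_fit(emg_peak, t, noisy, p0=(80.0, 9.0, 1.0, 1.0),
                       bounds=(1e-6, np.inf))
area, mu, sigma, tau = popt   # recovered peak parameters
```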

Relevance: 100.00%

Abstract:

Two complementary wireless sensor nodes for building two-tiered heterogeneous networks are presented. A larger node, 25 mm by 25 mm, acts as the backbone of the network and can handle complex data processing. A smaller, cheaper node, 10 mm by 10 mm, performs simpler sensor-interfacing tasks. The 25 mm node is based on previous work at the Tyndall National Institute that created a modular wireless sensor node. In this work, a new 25 mm module is developed that operates in the 433/868 MHz frequency bands with a range of 3.8 km. The 10 mm node is highly miniaturised while retaining a high level of modularity, and is designed for very energy-efficient operation in applications with low duty cycles, with a sleep current of 3.3 μA. Both nodes use commercially available components and have low manufacturing costs to allow the construction of large networks. In addition, interface boards for communicating with the nodes have been developed for both form factors; these provide a USB connection and support recharging of a Li-ion battery from the USB power supply. This paper discusses the design goals, the design methods, and the resulting implementation.

Relevance: 100.00%

Abstract:

Body Sensor Network (BSN) technology is seeing rapid adoption in application areas such as health, fitness, and sports monitoring. Current BSN wireless sensors typically operate on a single frequency band (e.g. the IEEE 802.15.4 standard at 2.45 GHz) and employ a single radio transceiver for wireless communications. This allows a simple wireless architecture to be realized with low cost and power consumption. However, network congestion or failure creates potential issues for reliability of data transfer, quality of service (QoS), and data throughput, and these issues can be especially critical in healthcare monitoring applications, where data availability and integrity are crucial. Adding more than one radio can address some of these issues: multi-radio implementations allow access to more than one network, providing increased coverage and data-processing capacity as well as improved interoperability between networks. The small number of multi-radio wireless sensor solutions that exist at present require more than one transceiver device to achieve multi-band operation. This paper presents the design of a novel prototype multi-radio hardware platform that uses a single radio transceiver. The proposed design allows multi-band operation in the 433/868 MHz ISM bands, and this, together with its low complexity and small form factor, makes it suitable for a wide range of BSN applications.

Relevance: 100.00%

Abstract:

Buried heat sources can be investigated by examining thermal infrared images and comparing them with theoretical models that predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux greater than 2% of solar insolation is required to produce a detectable anomaly in a thermal infrared image. For typical terrestrial conditions, a heat source, say, 2-300 K hotter than the average surface temperature must lie at a depth shallower than 50 m to be detectable in such an image. Atmospheric factors are of critical importance: while the mean atmospheric temperature has little significance, convection is a dominant factor and can swamp the thermal signature entirely. Given a steady-state heat source that produces a detectable anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock from the surface thermal anomaly. The success of this technique depends strongly on how well the physical properties of the host rock are known; important parameters include the surface thermal properties and the thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite-element models can be readily and accurately applied to the investigation of diurnal heat flow, as in thermal inertia models; diurnal thermal models of environments on Earth, the Moon, and Mars, computed with finite elements, were found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was also considered. While this approach is useful for study, and for long-term monitoring in inhospitable areas, it has little value for hazard warning, because the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (120 m resolution) are inadequate for detailed study of geothermal anomalies; airborne systems such as TIMS (variable resolution of 3-6 m) are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for their successful application.
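As a rough illustration of the one-dimensional diurnal modelling mentioned above, this sketch integrates the 1D heat equation with an explicit finite-difference scheme driven by a sinusoidal diurnal surface temperature. The thesis used finite elements; the scheme, material properties, and forcing here are generic stand-ins.

```python
import numpy as np

def diurnal_heat_flow(kappa=1e-6, depth=1.0, nz=100, days=5.0,
                      t_mean=288.0, t_amp=10.0, period=86400.0):
    """Explicit FTCS solution of dT/dt = kappa * d2T/dz2 with a sinusoidal
    diurnal surface temperature and a zero-flux lower boundary.

    kappa : thermal diffusivity (m^2/s); ~1e-6 is typical of rock.
    """
    dz = depth / (nz - 1)
    dt = 0.4 * dz ** 2 / kappa              # respect the explicit stability limit
    n_steps = int(days * period / dt)
    T = np.full(nz, t_mean)
    for n in range(n_steps):
        T[0] = t_mean + t_amp * np.sin(2 * np.pi * n * dt / period)  # surface
        T[1:-1] += kappa * dt / dz ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                       # insulated (zero-flux) base
    return T                                # final temperature profile vs depth

profile = diurnal_heat_flow()               # diurnal wave decays with depth
```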

Relevance: 100.00%

Abstract:

This work considers the static calculation of a program's average-case running time. The number of systems that currently tackle this research problem is quite small, owing to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed in this work, only one of them forms the basis of this research: the MOQA system, which consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data-structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation and adds to the functions already available in the language. Furthermore, the theory behind MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here: for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses performed during this research reveal some of MOQA's strengths and weaknesses. This thesis aims to be pragmatic in evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared with similar systems. Succinctly, this work's significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA's accomplishments and a serious deliberation on the opportunities open to MOQA in the future.

Relevance: 100.00%

Abstract:

This paper describes implementations of two mobile cloud applications, file synchronisation and intensive data processing, using the Context Aware Mobile Cloud Services middleware and the Cloud Personal Assistant (CPA). Both are part of the same mobile cloud project, actively developed and currently in its second version. We describe recent changes to the middleware, along with experimental results for the two application models, and we discuss challenges faced during development of the middleware and their implications. The paper includes a performance analysis of the CPA's support for the two applications with respect to existing solutions.

Relevance: 100.00%

Abstract:

Long-reach passive optical networks (LR-PONs), which integrate fibre-to-the-home with metro networks, have been the subject of intensive research in recent years and are considered one of the most promising candidates for the next generation of optical access networks. Such systems ideally have reaches greater than 100 km and bit rates of at least 10 Gb/s per wavelength in both the downstream and upstream directions. Because little equipment sharing is possible in access networks, the laser transmitters in the terminal units, usually the most expensive components, must be as cheap as possible. However, the requirement for low cost is generally incompatible with a transmitter chirp characteristic optimised for such long reaches at 10 Gb/s, and hence dispersion compensation is required. In this thesis, electronic dispersion compensation (EDC) techniques are employed to increase the chromatic-dispersion tolerance and to enhance system performance at the expense of moderate additional implementation complexity. To use such EDC in LR-PON architectures, a number of challenges associated with the burst-mode nature of the upstream link must be overcome. In particular, the EDC must adapt from one burst to the next (burst-mode EDC, or BM-EDC) on time scales of tens to hundreds of nanoseconds. Burst-mode operation of EDC has received little attention to date. The main objective of this thesis is to demonstrate the feasibility of the concept and to identify the key BM-EDC design parameters required for a 10 Gb/s burst-mode link. This is achieved through a combination of simulations and transmission experiments utilising off-line data processing. The research shows that burst-to-burst adaptation can in principle be implemented efficiently, opening the possibility of low-overhead, adaptive EDC-enabled burst-mode systems.
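EDC of the kind discussed here is commonly realised as an adaptive feed-forward equalizer, so the sketch below shows a least-mean-squares (LMS) tap update that could, in principle, be re-converged on each burst's preamble. It is a generic illustration under that assumption, not the receiver design of the thesis.

```python
import numpy as np

def lms_equalize(rx, training, n_taps=11, mu=0.01):
    """Adaptive feed-forward equalizer trained with LMS.

    rx       : received (dispersed) sample stream; needs at least
               len(training) + n_taps - 1 samples
    training : known symbols for the burst preamble
    mu       : step size; larger converges faster but less stably
    """
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                  # start from a pass-through filter
    out = np.empty(len(training))
    for k in range(len(training)):
        x = rx[k: k + n_taps][::-1]       # tap delay line (newest first)
        y = w @ x                         # equalizer output
        e = training[k] - y               # error against the known symbol
        w += mu * e * x                   # LMS tap update
        out[k] = y
    return w, out                         # converged taps reused for payload
```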

Relevance: 100.00%

Abstract:

In many important high-technology markets, including software development, data processing, communications, aeronautics, and defense, suppliers learn through experience how to provide better service at lower cost. This paper examines how a buyer designs dynamic competition among rival suppliers to exploit learning economies while minimizing the costs of becoming locked in to one producer. Strategies for controlling dynamic competition include the handicapping of more efficient suppliers in procurement competitions, the protection and allocation of intellectual property, and the sharing of information among rival suppliers. (JEL C73, D44, L10).

Relevance: 100.00%

Abstract:

BACKGROUND: Palliative medicine has made rapid progress in establishing its scientific and clinical legitimacy, yet the evidence base to support clinical practice remains deficient in both the quantity and quality of published studies. Historically, the conduct of research in palliative care populations has been impeded by multiple barriers including health care system fragmentation, small number and size of potential sites for recruitment, vulnerability of the population, perceptions of inappropriateness, ethical concerns, and gate-keeping. METHODS: A group of experienced investigators with backgrounds in palliative care research convened to consider developing a research cooperative group as a mechanism for generating high-quality evidence on prioritized, clinically relevant topics in palliative care. RESULTS: The resulting Palliative Care Research Cooperative (PCRC) agreed on a set of core principles: active, interdisciplinary membership; commitment to shared research purposes; heterogeneity of participating sites; development of research capacity in participating sites; standardization of methodologies, such as consenting and data collection/management; agile response to research requests from government, industry, and investigators; focus on translation; education and training of future palliative care researchers; actionable results that can inform clinical practice and policy. Consensus was achieved on a first collaborative study, a randomized clinical trial of statin discontinuation versus continuation in patients with a prognosis of less than 6 months who are taking statins for primary or secondary prevention. This article describes the formation of the PCRC, highlighting processes and decisions taken to optimize the cooperative group's success.