887 results for DATA-ACQUISITION SYSTEM
Abstract:
Cross-well seismic is a relatively new geophysical method in which both the source and the receivers are placed in wells, so that the seismic wavefield of the geological body between them is observed directly. Because the signal does not pass through the low-velocity weathering layer, absorption of its high-frequency components is avoided and an extremely high-resolution seismic signal can be acquired, yielding very fine images of inter-well formations, structure and reservoirs. Integrated study of the high-frequency S-wave and P-wave data, together with other data, makes it possible to identify small faults and small structures and to resolve problems of thin beds, reservoir connectivity, fluid distribution, steam injection and fracturing. The method thus links high-resolution surface seismic, well logging and reservoir engineering. In this paper, based on the exploration and production situation of the oilfield and on geophysical exploration theory, cross-well seismic technology is studied in general and its key technical issues in particular. An integrated technology series covering field acquisition, data processing and interpretation, together with its integrated application, was developed; the method can be applied to oilfield development and to optimizing development schemes. The contents and results of the paper are as follows. An overview is given of the status and development of the cross-well seismic method, of its outstanding problems, and of the gap between cross-well seismic technology in China and the international level. Foreign-made cross-well seismic field data acquisition systems are analysed and compared, and the advantages and disadvantages of the systems manufactured by the two foreign companies are pointed out; this is highly valuable for importing a cross-well seismic field acquisition system into China. From analysis of the acquisition geometry design and the field data, a general wavefield time-depth curve equation was derived, three types of tube waves were identified for the first time, and their generation mechanism was studied. Based on wavefield separation theory for cross-well seismic, and on the observation that different wave types have different attribute characteristics in different gather domains, several methods (for example F-K filtering and median filtering) were applied to eliminate or suppress cross-well noise and to separate the upgoing and downgoing waves, with satisfactory results. For wavefield numerical simulation, conventional ray tracing and its shortcomings are analysed, and a minimum-travel-time ray tracing method based on Fermat's principle is proposed; this method is not only fast, but also prevents rays from entering a "dead end" or "blind spot" after many iterations, and it is better suited to complex velocity models.
For the first time, travel-time interpolation is taken into account and a dynamic shortest-path ray tracing method is developed for first arrivals in arbitrarily complex media, including transmitted, diffracted and refracted waves. This removes the limitation of travelling only from node to node and improves the accuracy of the minimum travel time and the ray path. A solution and the corresponding boundary conditions are also derived for the fourth-order differential acoustic wave equation, and cross-well seismic synthetics are computed for given sources and receivers over several geological models. The real cross-well seismic wavefield can thus be recognized by scientific means, providing an important basis for guiding cross-well seismic field geometry design. A least-squares conjugate-gradient velocity tomographic inversion was developed for cross-well seismic data, the objective function of the earlier high-frequency ray-based method was modified, and a thin-bed-oriented finite-frequency velocity tomographic inversion method was put forward. Theoretical models and results demonstrate that the method is simple and effective, and it is important for seismic tomographic imaging of complex geological bodies. Based on the characteristics of the cross-well seismic algorithms, a processing flow for cross-well seismic data was built, optimized and applied in production, and good velocity tomographic inversion and cross-well reflection imaging sections were obtained. Cross-well seismic data are acquired in the depth domain, and how to interpret depth-domain data and retrieve attributes from them is a new subject. After studying depth-domain synthetics and trace integration for cross-well seismic interpretation, log-constrained wave impedance inversion of cross-well seismic data was investigated and an initial interpretation flow was established. Applying and interpreting the cross-well seismic data in this way gave good geological results in velocity tomographic inversion and reflection depth imaging, and many difficult oilfield development problems were resolved. This powerful new method supports optimization of development schemes and improved oil recovery. Finally, starting from conventional reservoir geological modelling from logging data, a new approach is discussed for improving the accuracy of the reservoir geological model by using high-resolution cross-well seismic data as a constraint; it was applied to the Fan 124 project with good results, indicating a bright future for cross-well seismic technology.
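The least-squares conjugate-gradient tomography mentioned above can be illustrated with a small, self-contained sketch. It is not the thesis's implementation: the geometry, the straight-ray approximation, the grid and the choice of SciPy's LSQR solver (a conjugate-gradient-type method for the damped least-squares problem) are all assumptions made for illustration.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Hypothetical inter-well slowness grid: nz depth cells by nx lateral cells.
nz, nx, cell = 20, 10, 10.0                       # cell size in metres
true_slowness = np.full((nz, nx), 1.0 / 2000.0)   # 2000 m/s background
true_slowness[8:12, :] = 1.0 / 1500.0             # a slower layer between the wells

def straight_ray_row(src, rec, n_samp=400):
    """Approximate path length of a straight source-receiver ray in each cell."""
    row = np.zeros(nz * nx)
    frac = np.linspace(0.0, 1.0, n_samp)
    xs = src[0] + frac * (rec[0] - src[0])
    zs = src[1] + frac * (rec[1] - src[1])
    seg = np.hypot(rec[0] - src[0], rec[1] - src[1]) / n_samp
    for x, z in zip(xs, zs):
        ix = min(int(x // cell), nx - 1)
        iz = min(int(z // cell), nz - 1)
        row[iz * nx + ix] += seg
    return row

# Sources in one well (x = 0), receivers in the other well (x = nx * cell).
sources = [(0.0, z) for z in np.linspace(5.0, nz * cell - 5.0, 15)]
receivers = [(nx * cell, z) for z in np.linspace(5.0, nz * cell - 5.0, 15)]

G = lil_matrix((len(sources) * len(receivers), nz * nx))
t_obs = np.zeros(G.shape[0])
k = 0
for s in sources:
    for r in receivers:
        row = straight_ray_row(s, r)
        G[k, :] = row
        t_obs[k] = row @ true_slowness.ravel()    # synthetic first-arrival times
        k += 1
G = G.tocsr()

# Damped least-squares inversion; LSQR is a conjugate-gradient-type Krylov solver.
slowness_est = lsqr(G, t_obs, damp=1e-4)[0]
residual = G @ slowness_est - t_obs
print("rms travel-time residual (s):", np.sqrt(np.mean(residual ** 2)))
```

In a real cross-well survey the ray paths would be curved and recomputed as the velocity model is updated; the straight-ray, single-pass version above only shows the shape of the linear system being solved.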
Abstract:
The article first introduces the basic theory of magnetotellurics and the essential methods of data acquisition and preprocessing. It then presents the methods for computing the transfer function, together with their respective advantages: the least-squares method, the remote-reference method and the robust method. The article also describes the cause and influence of static shift and summarizes how to correct it efficiently, and then focuses on the theory of the popular impedance tensor decomposition methods: the phase-sensitive method, the Groom-Bailey method, the general tensor analysis method and the Mohr circle analysis method. The core step of magnetotelluric data processing is inversion, which is another important topic of the article. The basic theory of the popular one-dimensional inversion methods (Automod, Occam, Rhoplus, Bostick and Ipi2win) and of the two-dimensional inversion methods (Occam, Rebocc, Abie and Nlcg) is introduced, and the relative merits of each inversion method are then compared on practical models, leading to meaningful conclusions. Visual program design for magnetotelluric data processing is another core part of the article. Earlier visual processing programs were unsatisfactory and unsystematic: the processing methods were limited, the data management was not systematic, and the data formats were not uniform. The design presented here is based on practicality, structure, variety and extensibility, and adopts database technology and mixed-language programming; the result is a magnetotelluric data management and processing system that integrates database storage and retrieval, data processing and graphical display. Finally, the article turns to application: taking the Tulargen Cu-Ni mining area in Xinjiang as a practical example and using the data processing methods introduced above, it gives a detailed account of the magnetotelluric data interpretation procedure.
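As one concrete example of the 1D methods listed (the Bostick transform), the following minimal sketch applies the Niblett-Bostick approximate inversion to a made-up apparent-resistivity curve. The formula is the standard textbook one; the sounding curve and numerical values are invented.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def bostick(periods, rho_app):
    """Niblett-Bostick approximate 1D inversion of MT apparent resistivity.

    periods : periods T in seconds (ascending)
    rho_app : apparent resistivity rho_a(T) in ohm.m
    Returns (depth in metres, Bostick resistivity in ohm.m).
    """
    T = np.asarray(periods, float)
    rho = np.asarray(rho_app, float)
    depth = np.sqrt(rho * T / (2.0 * np.pi * MU0))        # penetration depth
    m = np.gradient(np.log(rho), np.log(T))               # slope of log(rho_a) vs log(T)
    m = np.clip(m, -0.99, 0.99)                            # keep the transform stable
    rho_b = rho * (1.0 + m) / (1.0 - m)
    return depth, rho_b

# Hypothetical smooth sounding curve rising from ~100 to ~1000 ohm.m.
T = np.logspace(-3, 2, 30)
rho_a = 100.0 + 900.0 / (1.0 + 0.5 / T)
z, rho_b = bostick(T, rho_a)
for zi, ri in zip(z[::6], rho_b[::6]):
    print(f"depth {zi:10.1f} m   rho_B {ri:8.1f} ohm.m")
```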
Abstract:
The Ordos Basin is a typical cratonic petroliferous basin with 40 oil- and gas-bearing bed sets. It is characterized by stable multicycle sedimentation, gentle formations and few structures. The reservoir beds in the Upper Paleozoic and Mesozoic are mainly of low density and low permeability, with strong lateral change and strong vertical heterogeneity. The basin is covered by the well-known Loess Plateau in the south and by the Maowusu Desert, Kubuqi Desert and Ordos grasslands in the north, so seismic data acquisition is very difficult and the data often show inadequate precision, strong interference, low signal-to-noise ratio and low resolution. Because of the complicated surface and subsurface conditions, it is very difficult to distinguish thin beds and to study continental high-resolution lithologic sequence stratigraphy from routine seismic profiles. A method with clear physical significance, based on advanced mathematical-physical theory and algorithms, is therefore needed to improve the precision with which thin continental sand-peat interbedded configurations can be detected. The Generalized S Transform (GST) provides a new phase-space analysis method for seismic data. Like the wavelet transform, it has very good localization characteristics; however, being directly related to the Fourier spectrum, the GST has clearer physical meaning. Moreover, the GST uses a technique that best approximates the seismic wavelet when transforming the data into the time-scale domain, breaking through the fixed-wavelet limitation of the S transform, so it is widely adaptable. After tracing the development of the ideas and theory from the wavelet transform through the S transform to the GST, we studied how the GST can improve the precision of thin-stratum detection. Noise strongly affects sequence detection in the GST, especially in low signal-to-noise data. We studied the distribution of coloured noise in the GST domain and proposed a technique for distinguishing signal from noise there. Two types of noise were considered, white noise and red noise, the latter satisfying a statistical autoregressive model; for both models the GST-domain signal-noise detection technique gave good results. This shows that the technique can be applied to real seismic data and can effectively suppress the influence of noise on seismic sequence detection. On seismic profiles after GST processing, zones of intense high-amplitude energy and block-like (schollen), strip-shaped and lens-shaped blank or disordered zones may each carry specific geological meaning in a given geological setting. Using the seismic sequence detection profiles together with other seismic interpretation technologies, we can depict palaeo-geomorphology in detail, estimate sand-body extent, distinguish sedimentary facies, determine target areas and directly guide oil and gas exploration. In the lateral reservoir prediction of the XF oilfield in the Ordos Basin, the study of the Triassic palaeo-geomorphology and the subdivision of the internal sequence of the stratum group played a very important role in estimating sand-body extent. From the high-resolution seismic profiles after GST processing, we concluded that the C8 Member of the Yanchang Formation in the DZ area and the C8 Member in the BM area belong to the same deposit.
This provided the foundation for 430 million tons of predicted reserves and for building 3 million tons of production capacity. In the key-problem study for the SLG gas field, the high-resolution seismic sequence profiles showed that the depositional direction of the H8 member is approximately N-S or NNE-SSW. Using the seismic sequence profiles combined with layer-level profiles, the shape of the incised streams can be interpreted; sunken lens-shaped features indicate high-energy stream channels with stronger hydrodynamics. In this way we outlined three high-energy stream channels and determined target areas for exploitation. Finding high-energy braided rivers by high-resolution sequence processing is the key technology in the SLG area. In the ZZ area we used GST processing to study the distribution of the main reservoir bed, S23, a shallow-delta thin sand bed. The seismic sequence profiles show that the blocky thick sand beds are only locally distributed, and that most of the sands are distributary-channel and distributary-bar deposits. We then determined that the S23 sand depositional direction is NW-SE in the west, N-S in the centre and NE-SW in the east. The high-resolution seismic sequence interpretation profiles were tested against 14 wells; 2 wells did not match, giving a coincidence rate of 85.7%. On the basis of the profiles we proposed 3 prediction wells; one (Yu54) has been completed and the other two are still being drilled. The completed well is consistent with the forecast. The paper shows that the GST is an effective technology for obtaining high-resolution seismic sequence profiles, subdividing depositional microfacies, confirming sandstone strike directions and delineating the distribution of oil- and gas-bearing sandstone, and it is the key technique for exploring lithologic oil and gas pools in complicated areas.
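Because the GST builds on the S transform, a small sketch of the standard Stockwell S transform (the fixed-window special case that the GST generalizes) may help fix ideas. The frequency-domain implementation and the synthetic two-frequency trace below are illustrative only and are not the paper's generalized transform.

```python
import numpy as np

def s_transform(x):
    """Standard Stockwell S transform (frequency-domain implementation).

    Returns S[k, t]: the voice for frequency index k at time sample t.
    """
    x = np.asarray(x, float)
    N = x.size
    X = np.fft.fft(x)
    XX = np.concatenate([X, X])                 # wrap-around copy of the spectrum
    m = np.fft.fftfreq(N) * N                   # symmetric frequency indices
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                          # zero-frequency voice
    for k in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * (np.pi ** 2) * m ** 2 / k ** 2)   # frequency-domain Gaussian
        S[k, :] = np.fft.ifft(XX[k:k + N] * gauss)
    return S

# Synthetic trace: 30 Hz in the early part, 60 Hz later (1 ms sampling).
dt, N = 0.001, 512
t = np.arange(N) * dt
trace = np.where(t < 0.25, np.sin(2 * np.pi * 30 * t), np.sin(2 * np.pi * 60 * t))
S = s_transform(trace)
freqs = np.arange(N // 2 + 1) / (N * dt)
for sample in (100, 400):
    k = np.abs(S[:, sample]).argmax()
    print(f"t = {t[sample]:.3f} s  dominant frequency ~ {freqs[k]:.1f} Hz")
```

The time-frequency map produced this way is what the thin-bed detection works on; the GST replaces the fixed Gaussian window with a wavelet-matched one.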
Abstract:
Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral-domain optical coherence tomography (SDOCT) offers potential advantages over Placido-ring and Scheimpflug-photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correcting image data artifacts resulting from 3D refraction of the SDOCT beam in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central corneal optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained with a commercial 859 nm SDOCT retinal imaging system fitted with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was also performed in vivo in 3 patients and compared with commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 diopters, and the mean difference between systems was 0.1 diopters between SDOCT and Scheimpflug photography and 0.6 diopters between SDOCT and Placido topography.
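The Zernike-based surface reconstruction step can be sketched in a much-simplified form. The code below is not the paper's algorithm (the RHSA and the refraction correction are omitted): it only fits the first six Zernike terms, in one common Noll-style normalization, to synthetic elevation samples and recovers an assumed radius of curvature from the defocus coefficient; all numbers are invented.

```python
import numpy as np

def zernike_design(x, y):
    """First six Zernike terms (one common Noll-style normalization) on the unit disk."""
    r2 = x ** 2 + y ** 2
    return np.column_stack([
        np.ones_like(x),                     # Z1 piston
        2.0 * x,                             # Z2 tilt (x)
        2.0 * y,                             # Z3 tilt (y)
        np.sqrt(3.0) * (2.0 * r2 - 1.0),     # Z4 defocus
        np.sqrt(6.0) * 2.0 * x * y,          # Z5 oblique astigmatism
        np.sqrt(6.0) * (x ** 2 - y ** 2),    # Z6 vertical astigmatism
    ])

rng = np.random.default_rng(0)
a = 3.0        # zone radius used to normalize x, y (mm) -- assumed
R_true = 7.8   # hypothetical anterior radius of curvature (mm)

# Synthetic anterior-surface elevation samples inside the normalized zone.
x, y = rng.uniform(-1.0, 1.0, (2, 2000))
keep = x ** 2 + y ** 2 <= 1.0
x, y = x[keep], y[keep]
sag = (x ** 2 + y ** 2) * a ** 2 / (2.0 * R_true)      # paraxial sag (mm)
sag += rng.normal(0.0, 1e-4, x.size)                   # measurement noise

A = zernike_design(x, y)
coeffs, *_ = np.linalg.lstsq(A, sag, rcond=None)

# For a paraxial sphere, the defocus coefficient c4 = a^2 / (4*sqrt(3)*R).
R_est = a ** 2 / (4.0 * np.sqrt(3.0) * coeffs[3])
print("fitted Zernike coefficients (mm):", np.round(coeffs, 4))
print(f"recovered radius of curvature: {R_est:.2f} mm (true {R_true} mm)")
```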
Abstract:
Developed for use with triple-GEM detectors, the GEM Electronic Board (GEB) forms a crucial part of the electronics readout system being developed as part of the CMS muon upgrade programme. The objective of the GEB is threefold: to provide stable powering and grounding for the VFAT3 front ends, to enable high-speed communication between 24 VFAT3 front ends and an optohybrid, and to shield the GEM detector from electromagnetic interference. The paper describes the concept and design of a large-size GEB in detail, highlighting the design and feasibility challenges of this deceptively difficult system component.
Abstract:
The NERC Earth Observation Data Acquisition and Analysis Service (NEODAAS) provides a central point of Earth Observation (EO) satellite data access and expertise for UK researchers. The service is tailored to individual users’ requirements to ensure that researchers can focus effort on their science, rather than struggling with correct use of unfamiliar satellite data.
Abstract:
The introduction of functional data into the radiotherapy treatment planning process is currently the focus of significant commercial, technical, scientific and clinical development. The potential of such data from positron emission tomography (PET) was recognized at an early stage and was integrated into the radiotherapy treatment planning process through the use of image fusion software. The combination of PET and CT in a single system (PET/CT) to form an inherently fused anatomical and functional dataset has provided an imaging modality which could be used as the prime tool in the delineation of tumour volumes and the preparation of patient treatment plans, especially when integrated with virtual simulation. PET imaging, typically using ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG), can provide data on metabolically active tumour volumes. These functional data have the potential to modify treatment volumes and to guide treatment delivery to cells with particular metabolic characteristics. This paper reviews the current status of the integration of PET and PET/CT data into the radiotherapy treatment process. Consideration is given to the requirements of PET/CT data acquisition with reference to patient positioning aids and the limitations imposed by the PET/CT system. It also reviews the approaches being taken to the definition of functional/tumour volumes and the mechanisms available to measure and include physiological motion in the imaging process. The use of PET data must be based upon a clear understanding of the interpretation and limitations of the functional signal. Protocols for the implementation of this development remain to be defined, and outcomes data based upon clinical trials are still awaited. © 2006 The British Institute of Radiology.
Abstract:
The impact of power fluctuations arising from fixed-speed wind turbines on the magnitude and frequency of inter-area oscillations was investigated. The authors used data acquisition equipment to record the power flow on the interconnector between the Northern Ireland and Republic of Ireland systems. By monitoring the interconnector oscillation using a fast Fourier transform, it was possible to determine the magnitude and frequency of the inter-area oscillation between the Northern Ireland electricity system and that of the Electricity Supply Board. Analysis was performed to determine the relationship (if any) between the inter-area oscillation and the observed wind power generation at the corresponding time. Subsequently, regression analysis was introduced to model this relationship between the FFT output and the wind power generation. The effect of conventional generators on the magnitude and frequency of the inter-area oscillation was also considered.
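The analysis chain described (FFT of the recorded interconnector flow, then regression of the oscillation magnitude on wind output) can be sketched as follows. This is a minimal illustration with synthetic data and an assumed underlying relation, not the study's dataset or its exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10.0                        # hypothetical sampling rate of the recorder, Hz
t = np.arange(0, 600, 1 / fs)    # a 10-minute record

def oscillation_magnitude(power, fs, f_lo=0.1, f_hi=1.0):
    """Return (frequency, magnitude) of the largest spectral peak in a band."""
    power = power - power.mean()
    spec = np.abs(np.fft.rfft(power)) / len(power)
    freqs = np.fft.rfftfreq(len(power), 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    k = np.argmax(spec * band)
    return freqs[k], 2 * spec[k]          # single-sided amplitude

# Synthetic study: oscillation amplitude grows with wind output (made-up relation).
wind_mw = rng.uniform(50, 400, 20)        # recorded wind generation per record
peak_mag = []
for w in wind_mw:
    amp = 0.02 * w + 2.0                                   # assumed relation, MW
    flow = 150 + amp * np.sin(2 * np.pi * 0.45 * t) + rng.normal(0, 1.0, t.size)
    f_peak, mag = oscillation_magnitude(flow, fs)
    peak_mag.append(mag)

# Simple linear regression of FFT peak magnitude on wind generation.
slope, intercept = np.polyfit(wind_mw, peak_mag, 1)
print(f"dominant oscillation frequency ~ {f_peak:.2f} Hz")
print(f"oscillation magnitude ~ {slope:.3f} * wind_MW + {intercept:.2f}")
```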
Abstract:
Data processing is an essential part of Acoustic Doppler Profiler (ADP) surveys, which have become the standard tool in assessing flow characteristics at tidal power development sites. In most cases, further processing beyond the capabilities of the manufacturer-provided software tools is required. These additional tasks are often implemented individually by each user in mathematical toolboxes like MATLAB, Octave or Python. This requires the transfer of the data from one system to another and thus increases the possibility of errors. The application of dedicated tools for the visualisation of flow or geographic data is also often beneficial, and a wide range of tools are freely available, though again problems arise from the necessity of transferring the data. Furthermore, ADP manufacturers directly support almost exclusively PCs, whereas small computing solutions like tablet computers, often running Android or Linux operating systems, seem better suited for online monitoring or data acquisition in field conditions. While many manufacturers offer support for developers, any such solution is limited to a single device of a single manufacturer. A common data format for all ADP data would allow the development of applications and quicker distribution of new post-processing methodologies across the industry.
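To make the idea of a vendor-neutral container concrete, the sketch below defines one possible common shape for a single ADP ensemble and a converter from a hypothetical vendor record. The field names, the record layout and the converter are all invented for illustration; the paper does not prescribe this structure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ADPProfile:
    """A minimal, vendor-neutral container for one ADP ensemble."""
    time: float                    # seconds since start of deployment
    bin_depths: np.ndarray         # centre depth of each cell (m)
    east: np.ndarray               # east velocity per cell (m/s)
    north: np.ndarray              # north velocity per cell (m/s)
    up: np.ndarray                 # vertical velocity per cell (m/s)
    meta: dict = field(default_factory=dict)   # instrument, site, QC flags, ...

def from_vendor_record(rec: dict) -> ADPProfile:
    """Convert a hypothetical vendor-specific record into the common form."""
    return ADPProfile(
        time=rec["timestamp_s"],
        bin_depths=np.asarray(rec["cell_depth_m"], float),
        east=np.asarray(rec["vel_east_ms"], float),
        north=np.asarray(rec["vel_north_ms"], float),
        up=np.asarray(rec["vel_up_ms"], float),
        meta={"instrument": rec.get("model", "unknown")},
    )

# Entirely made-up vendor record and a simple derived quantity.
rec = {"timestamp_s": 0.0, "model": "ExampleADP",
       "cell_depth_m": [2, 4, 6, 8],
       "vel_east_ms": [0.9, 1.0, 1.1, 1.2],
       "vel_north_ms": [0.1, 0.1, 0.2, 0.2],
       "vel_up_ms": [0.0, 0.0, 0.0, 0.0]}
profile = from_vendor_record(rec)
speed = np.hypot(profile.east, profile.north)
print("depth-averaged speed:", round(float(speed.mean()), 3), "m/s")
```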
Abstract:
Cyber threats to Supervisory Control and Data Acquisition (SCADA) systems have the potential to cause physical damage and to jeopardize power system operation, safety and stability. SCADA systems were originally designed with little consideration of escalating cyber threats, so developing robust intrusion detection technologies tailored to SCADA requirements is an emerging topic and a major challenge. This paper proposes a stateful Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method to improve the cyber-security of SCADA systems that use the IEC 60870-5-104 protocol, which is designed for basic telecontrol communications. A stateful protocol analysis approach designed specifically for the IEC 60870-5-104 protocol is presented. Finally, the proposed intrusion detection approach is implemented and validated.
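A toy, stateless version of the packet-parsing part of such a DPI check is sketched below; it is not the paper's stateful IDS. The APCI/ASDU byte layout follows the IEC 60870-5-104 standard, but the whitelist policy, the type-ID table shown and the example frame are invented.

```python
# Toy DPI check on a single IEC 60870-5-104 APDU (illustrative only).
CONTROL_TYPES = {45: "C_SC_NA_1 single command",
                 46: "C_DC_NA_1 double command",
                 100: "C_IC_NA_1 interrogation command"}
ALLOWED_CONTROL_TYPES = {100}          # hypothetical site policy: interrogation only

def inspect_apdu(frame: bytes) -> str:
    if len(frame) < 6 or frame[0] != 0x68:
        return "malformed: bad start byte or truncated APCI"
    if frame[1] != len(frame) - 2:
        return "malformed: APDU length field mismatch"
    if frame[2] & 0x01:                # S- or U-format frame carries no ASDU
        return "supervisory/unnumbered frame"
    type_id = frame[6]                 # ASDU type identification
    cot = frame[8] & 0x3F              # cause of transmission (low 6 bits)
    common_addr = frame[10] | (frame[11] << 8)
    if type_id in CONTROL_TYPES and type_id not in ALLOWED_CONTROL_TYPES:
        return (f"ALERT: unexpected control ASDU {CONTROL_TYPES[type_id]} "
                f"(COT={cot}, common address={common_addr})")
    return f"ok: type {type_id}, COT={cot}, common address={common_addr}"

# Invented example: an I-format frame carrying a single command (type 45).
frame = bytes([0x68, 0x0E, 0x00, 0x00, 0x00, 0x00,
               45, 0x01, 0x06, 0x00, 0x01, 0x00,
               0x01, 0x00, 0x00, 0x01])
print(inspect_apdu(frame))
```

A stateful analyser, as proposed in the paper, would additionally track the expected sequence of requests and responses per connection rather than judging each APDU in isolation.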
Abstract:
The future European power system will have a hierarchical structure created by layers of system control, from a Supergrid via regional high-voltage transmission through to medium- and low-voltage distribution. Each level will have generation sources, such as large-scale offshore wind, wave, solar thermal and nuclear plant connected directly to the Supergrid, and high levels of embedded generation connected to the medium-voltage distribution system. It is expected that the fuel portfolio will be dominated by offshore wind in Northern Europe and PV in Southern Europe. The strategies required to manage the coordination of supply-side variability with demand-side variability will include large-scale interconnection, demand-side management, load aggregation and storage in the context of the Supergrid combined with the Smart Grid. The associated design challenge will include not only control topology, data acquisition, analysis and communications technologies, but also the selection of the fuel portfolio at a macro level. This paper quantifies the amount of demand-side management, storage and so-called 'back-up generation' needed to support an 80% renewable energy portfolio in Europe by 2050. © 2013 IEEE.
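A back-of-envelope version of the residual-load balance that such a quantification rests on is sketched below. Every number and profile here is invented; the sketch only shows the bookkeeping (demand minus renewables, then demand-side management and storage, with the remainder met by back-up), not the paper's scenario model.

```python
import numpy as np

rng = np.random.default_rng(3)
hours = 8760
# Invented hourly profiles (GW): demand with a daily swing, variable renewables.
demand = 400 + 80 * np.sin(2 * np.pi * np.arange(hours) / 24)
renewables = np.clip(rng.normal(330, 150, hours), 0, None)

residual = demand - renewables            # > 0: shortfall, < 0: surplus
dsm_gw = 30.0                             # assumed shiftable demand (GW)
storage_gw = 50.0                         # assumed storage discharge power (GW)

shortfall = np.clip(residual, 0, None)
backup_needed = np.clip(shortfall - dsm_gw - storage_gw, 0, None)

print(f"renewable generation / demand : {renewables.sum() / demand.sum():5.1%}")
print(f"hours needing back-up         : {(backup_needed > 0).sum()} of {hours}")
print(f"peak back-up capacity         : {backup_needed.max():.0f} GW")
print(f"curtailable surplus energy    : {-np.clip(residual, None, 0).sum() / 1000:.0f} TWh")
```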
Abstract:
Cloud data centres are critical business infrastructures and the fastest growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without need for training or complex infrastructure set up. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever correlation drops below a threshold value. We demonstrate and evaluate LADT using a Cloud environment, where it shows that the hosting node I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
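The correlation hypothesis is simple enough to sketch end to end. The code below is a minimal illustration, not LADT itself: it computes windowed Pearson correlation between host IOPS and aggregated VM IOPS on synthetic traces and flags windows where the correlation falls below a threshold; the window length, threshold and injected disturbance are all assumptions.

```python
import numpy as np

def detect_anomalies(host_iops, vm_iops_sum, window=60, threshold=0.5):
    """Flag windows where host IOPS and aggregated VM IOPS lose correlation."""
    flags = []
    for start in range(0, len(host_iops) - window + 1, window):
        h = host_iops[start:start + window]
        v = vm_iops_sum[start:start + window]
        r = np.corrcoef(h, v)[0, 1]
        flags.append((start, r, r < threshold))
    return flags

rng = np.random.default_rng(7)
n = 600
vm1 = rng.poisson(200, n).astype(float)       # synthetic per-VM IOPS traces
vm2 = rng.poisson(150, n).astype(float)
vm_sum = vm1 + vm2
host = vm_sum + rng.normal(0, 10, n)          # host mirrors the VMs when healthy
host[300:420] = rng.normal(2000, 50, 120)     # injected node-level disk stress

for start, r, anomalous in detect_anomalies(host, vm_sum):
    status = "ANOMALY" if anomalous else "ok"
    print(f"samples {start:4d}-{start + 59:4d}  corr={r:5.2f}  {status}")
```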
Abstract:
Highway structures such as bridges are subject to continuous degradation primarily due to ageing and environmental factors. A rational transport policy requires the monitoring of this transport infrastructure to provide adequate maintenance and guarantee the required levels of transport service and safety. In Europe, this is now a legal requirement - a European Directive requires all member states of the European Union to implement a Bridge Management System. However, the process is expensive, requiring the installation of sensing equipment and data acquisition electronics on the bridge. This paper investigates the use of an instrumented vehicle fitted with accelerometers on its axles to monitor the dynamic behaviour of bridges as an indicator of their structural condition. This approach eliminates the need for any on-site installation of measurement equipment. A simplified half-car vehicle-bridge interaction model is used in theoretical simulations to test the possibility of extracting the dynamic parameters of the bridge from the spectra of the vehicle accelerations. The effects of vehicle speed, vehicle mass and bridge span length on the detection of the bridge dynamic parameters are investigated. The algorithm is highly sensitive to the condition of the road profile, and simulations are carried out for both smooth and rough profiles.
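The extraction step, picking bridge-related peaks out of the axle acceleration spectrum, can be sketched with a synthetic record. The frequencies, amplitudes and noise level below are invented and the sketch is not the half-car simulation used in the paper.

```python
import numpy as np

fs = 200.0                               # sampling rate of the axle accelerometer, Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(11)

# Synthetic axle acceleration: vehicle bounce at 2.5 Hz, a bridge contribution at
# 4.2 Hz, plus broadband road-profile excitation (all values invented).
accel = (0.50 * np.sin(2 * np.pi * 2.5 * t)
         + 0.15 * np.sin(2 * np.pi * 4.2 * t)
         + rng.normal(0, 0.05, t.size))

spec = np.abs(np.fft.rfft(accel - accel.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Pick the strongest peaks below 10 Hz; one should sit near the bridge frequency.
band = freqs < 10.0
order = np.argsort(spec * band)[::-1][:3]
for k in sorted(order):
    print(f"peak at {freqs[k]:4.2f} Hz, amplitude {2 * spec[k]:.3f} m/s^2")
```

In practice the vehicle's own bounce and pitch frequencies must be known (or measured off-bridge) so that the remaining peaks can be attributed to the bridge.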
Abstract:
In recent years, Structural Health Monitoring (SHM) systems have been developed to monitor bridge deterioration, assess real load levels and hence extend bridge life and safety. A road bridge is only safe if the stresses caused by the passing vehicles are less than the capacity of the bridge to resist them. Conventional SHM systems can be used to improve knowledge of the bridge's capacity to resist stresses but generally give no information on the causes of any increase in stresses (based on measuring strain). The concept of Bridge Weigh-in-Motion (B-WIM) is to establish axle loads, without interruption to traffic flow, by using strain sensors at the bridge soffit and subsequently converting the data to real-time axle loads or stresses. Recent studies have shown it would be most beneficial to develop a portable system which can be easily attached to existing and new bridge structures for a specified monitoring period. The sensors could then be left in place while the data acquisition equipment is moved to other sites. It is therefore necessary to find accurate sensors capable of capturing peak strains under dynamic load and suitable methods for attaching these strain sensors to existing and new bridge structures. Additionally, it is important to ensure accurate strain transfer between the concrete or steel, the adhesive layer and the strain sensor. This paper describes research investigating the suitability of various sensors for the monitoring of concrete structures under dynamic vehicle load. Electrical resistance strain (ERS) gauges, vibrating wire (VW) gauges and fibre optic sensors (FOS) are commonly used for SHM. A comparative study is carried out to select a suitable sensor for a Bridge Weigh-in-Motion system, considering fixing methods, durability, scanning rate and accuracy range. Finite element modelling is used to predict the strains, which are then validated in laboratory trials.
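Although the paper itself concerns sensor selection, the B-WIM concept it builds on (converting measured strain to axle loads via an influence line) can be sketched with a classical Moses-style least-squares fit. The influence line, vehicle speed, axle spacing and loads below are invented, and strain is simply taken as proportional to the mid-span bending moment.

```python
import numpy as np

def midspan_influence_line(span, x):
    """Mid-span bending influence line of a simply supported beam (per unit load)."""
    x = np.asarray(x, float)
    il = np.where(x < span / 2.0, x / 2.0, (span - x) / 2.0)
    return np.clip(il, 0.0, None)          # zero when the load is off the bridge

span = 20.0                       # m
speed = 20.0                      # m/s, assumed known from axle detectors
axle_spacing = [0.0, 4.5]         # m behind the leading axle (two-axle truck, invented)
true_loads = np.array([60.0, 110.0])   # kN

fs = 100.0
t = np.arange(0, (span + max(axle_spacing)) / speed, 1 / fs)
positions = speed * t             # position of the leading axle along the bridge

# Design matrix: response to a unit load on each axle at each instant.
A = np.column_stack([midspan_influence_line(span, positions - s)
                     for s in axle_spacing])
strain = A @ true_loads + np.random.default_rng(5).normal(0, 0.5, t.size)

loads_est, *_ = np.linalg.lstsq(A, strain, rcond=None)
print("estimated axle loads (kN):", np.round(loads_est, 1))
print("estimated gross weight (kN):", round(float(loads_est.sum()), 1))
```

The quality of this fit is what makes the sensor requirements in the paper (peak-strain capture, scanning rate, strain transfer through the adhesive) matter in practice.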