921 results for seismic data processing


Relevance:

90.00%

Publisher:

Abstract:

In preparation for a field equity redetermination, the flanks of an oil-bearing structure were investigated to determine the most likely reservoir geometry in an area where the seismic path forks. Two alternative hypotheses were evaluated: a "high fork model", in which the reservoir top follows the higher of the two paths, and a "low fork model", in which it follows the lower path. I took four approaches to evaluate the hypotheses: 1) depth conversion with multiple velocity models to assess the fidelity of the picked horizon on models that did not contain a fork; 2) hand interpretation around the areas of high uncertainty to eliminate their influence; 3) assessment of how the path choice affects the plausibility of the environment of deposition; and 4) subsurface geometry modeling with synthetics to compare calculated 1D seismic responses with the current data. The investigation established that neither fork interpretation follows a continuous seismic reflector, but the two are otherwise equally plausible. Interval modeling revealed several structural scenarios, supporting both the high and the low fork, that fit the seismic data. To strengthen the low fork argument, a scenario with an additional sand interval off-structure is recommended on grounds of simplicity and geological reasonableness.
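
The synthetic-modelling step in approach 4) can be illustrated with a short sketch: a normal-incidence 1D synthetic trace built from a layered velocity/density model by converting impedance contrasts to reflection coefficients and convolving with a wavelet. The layer properties, wavelet frequency, and function names below are illustrative assumptions, not values from the study.

```python
import numpy as np

def ricker(freq, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency `freq` (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * freq * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(vp, rho, thickness, dt=0.002, f_peak=30.0):
    """1D normal-incidence synthetic: impedance -> reflectivity -> convolution."""
    imp = np.asarray(vp) * np.asarray(rho)                   # acoustic impedance per layer
    rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])         # reflection coefficients
    twt = 2.0 * np.cumsum(np.asarray(thickness) / np.asarray(vp))[:-1]  # two-way time to each interface
    n = int(np.ceil(twt.max() / dt)) + 100
    spikes = np.zeros(n)
    spikes[np.round(twt / dt).astype(int)] = rc              # reflectivity series on a time grid
    return np.convolve(spikes, ricker(f_peak, dt), mode="same")

# Hypothetical three-layer model bracketing a reservoir top (values illustrative only).
trace_high = synthetic_trace(vp=[2200, 2600, 3000], rho=[2.1, 2.3, 2.45], thickness=[800, 150, 400])
```

Traces computed for the alternative high and low fork geometries would then be compared against the recorded seismic response, as the abstract describes.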

Relevance:

90.00%

Publisher:

Abstract:

The Seattle Fault is an active, east-west-trending reverse fault zone that intersects both Seattle and Bellevue, two highly populated cities in Washington. Rupture along strands of the fault poses a serious threat to infrastructure and thousands of people in the region. Precise locations of fault strands are still poorly constrained in Bellevue because of blind thrusting, urban development, and/or erosion. Seismic reflection and aeromagnetic surveys have shed light on structural geometries of the fault zone in bedrock. However, the fault displaces both bedrock and unconsolidated Quaternary deposits, and seismic data are poor indicators of the locations of fault strands within the unconsolidated strata. Fortunately, evidence of past fault strand ruptures may also be recorded indirectly by fluvial processes and should be observable in the subsurface. I analyzed hillslope and river geomorphology using LiDAR data and ArcGIS to locate surface fault traces, and then compared and correlated these findings with subsurface offsets identified from borehole data. Geotechnical borings were used to locate one fault offset and to provide input to a cross section of the fault constructed with Rockworks software. Knickpoints, which may correlate with fault rupture, were found upstream of this newly identified fault offset as well as upstream of a previously known fault segment.
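
As a rough illustration of the knickpoint analysis, the sketch below flags abrupt changes in channel slope along a river longitudinal profile. The profile values, smoothing window, and slope-jump threshold are hypothetical; the study itself used LiDAR-derived profiles analysed in ArcGIS.

```python
import numpy as np

def find_knickpoints(distance, elevation, window=5, slope_jump=0.02):
    """Flag points where local channel slope changes abruptly (possible knickpoints).

    `distance` and `elevation` are downstream distance and channel elevation (m),
    e.g. sampled from a LiDAR-derived long profile; thresholds are illustrative.
    """
    slope = np.gradient(elevation, distance)                  # local channel slope
    smoothed = np.convolve(slope, np.ones(window) / window, mode="same")
    jumps = np.abs(np.diff(smoothed))                         # slope change between samples
    return np.where(jumps > slope_jump)[0]

# Hypothetical profile: a gentle channel with one abrupt step (e.g. above a fault offset).
x = np.linspace(0, 2000, 200)
z = 100 - 0.01 * x
z[120:] -= 8.0                                                # synthetic 8 m step in the profile
print(find_knickpoints(x, z))
```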

Relevance:

90.00%

Publisher:

Abstract:

The schema of an information system can significantly affect the ability of end users to retrieve the information they need efficiently and effectively. Obtaining the appropriate data quickly increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation based on the object-relational model of data. Complexity was measured using three Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
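
The Halstead measures named above are standard and straightforward to compute once a query has been tokenised into operators and operands. A minimal sketch follows; the tokenisation step and the example query tokens are assumptions for illustration, not taken from the study.

```python
import math

def halstead_metrics(operators, operands):
    """Halstead length, difficulty, and effort from operator/operand occurrence lists.

    `operators` and `operands` are token lists from a parsed query;
    splitting SQL/OQL text into operators vs. operands is left to the caller.
    """
    n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    length = N1 + N2                                   # program length N
    volume = length * math.log2(n1 + n2)               # program volume V
    difficulty = (n1 / 2) * (N2 / max(n2, 1))          # difficulty D
    effort = difficulty * volume                       # effort E = D * V
    return length, difficulty, effort

# Hypothetical token counts for a simple SELECT ... WHERE query.
print(halstead_metrics(["SELECT", "FROM", "WHERE", "="],
                       ["name", "employees", "dept", "'Sales'"]))
```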

Relevance:

90.00%

Publisher:

Abstract:

Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of the measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores. Conclusions: Detailed documentation of all survey procedures, with standardized protocols, training, and quality control, are steps towards optimizing data quality. Quantifying data quality is a further step. The methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
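
The two sensitivity tests described, excluding low-scoring populations and weighting by quality score, amount to simple reweighting of population-level estimates. A minimal sketch, with hypothetical estimates, scores, and cutoff (none of these values come from the MONICA data):

```python
import numpy as np

def sensitivity_check(estimates, quality_scores, cutoff=2):
    """Compare an unweighted mean, a quality-weighted mean, and a mean that
    excludes populations scoring below `cutoff` (scores and cutoff illustrative)."""
    est = np.asarray(estimates, dtype=float)
    q = np.asarray(quality_scores, dtype=float)
    unweighted = est.mean()
    weighted = np.average(est, weights=q)          # weight each population by its quality score
    kept = est[q >= cutoff]                        # drop low-quality populations
    return unweighted, weighted, kept.mean()

# Hypothetical risk-factor trend estimates for five populations, quality scored 0-3.
print(sensitivity_check([-0.4, -0.2, 0.1, -0.5, -0.3], [3, 2, 0, 3, 1]))
```

If the three summaries agree, the conclusions are insensitive to data quality; large disagreement signals that low-quality populations are driving the result.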

Relevance:

90.00%

Publisher:

Abstract:

Although managers consider accurate, timely, and relevant information critical to the quality of their decisions, evidence of large variations in data quality abounds. Over a period of twelve months, the action research project reported herein set out to investigate and track data quality initiatives undertaken by the participating organisation. The investigation focused on two types of errors: transaction input errors and processing errors. Whenever the action research initiative identified non-trivial errors, the participating organisation introduced actions to correct the errors and to prevent similar errors in the future. Data quality metrics were taken quarterly to measure improvements resulting from the activities undertaken during the project. The results indicated that, for a mission-critical database to ensure and maintain data quality, commitment to continuous data quality improvement is necessary. Communication among all stakeholders is also required to ensure a common understanding of data quality improvement goals. The project found that further substantial improvement of data quality sometimes requires structural changes within the organisation and to its information systems. The major goal of the study was to increase the level of data quality awareness within all organisations and to motivate them to examine the importance of achieving and maintaining high-quality data.

Relevance:

90.00%

Publisher:

Abstract:

Even when data repositories exhibit near-perfect data quality, users may formulate queries that do not correspond to the information requested. Users' poor information retrieval performance may arise either from problems in understanding the data models that represent the real-world systems or from deficiencies in their query skills. This research focuses on users' understanding of the data structures, i.e., their ability to map the information request onto the data model. The Bunge-Wand-Weber ontology was used to formulate three sets of hypotheses. Two laboratory experiments (one using a small data model and one using a larger data model) tested the effect of ontological clarity on users' performance when undertaking component-, record-, and aggregate-level tasks. For the hypotheses associated with different representations but equivalent semantics, the results indicate that participants using the parsimonious data model performed better on component-level tasks, whereas participants using the ontologically clearer data model performed better on record- and aggregate-level tasks.

Relevance:

90.00%

Publisher:

Abstract:

The optimum bandwidth for shallow, high-resolution seismic reflection differs from that required for conventional petroleum reflection surveying. An understanding of this issue is essential for the correct choice of acquisition instrumentation. Numerical modelling of simple Bowen Basin coal structures illustrates that, for high-resolution imaging, it is important to record accurately all frequencies up to the limit imposed by earth scattering. In contrast, the seismic image is much less dependent on frequencies at the lower end of the spectrum. These quantitative observations support the use of specialised high-frequency geophones for high-resolution seismic imaging. Synthetic seismic inversion trials demonstrate that, irrespective of the bandwidth of the seismic data, additional low-frequency impedance control is essential for accurate inversion. Inversion therefore provides no compelling argument for the use of conventional petroleum geophones in the high-resolution arena.
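
The inversion point can be illustrated with a small sketch: recursive (trace-integration) impedance inversion recovers absolute impedance from full-band reflectivity, but departs from the true trend once the low frequencies are removed, which is why independent low-frequency impedance control is needed. The blocky model and the crude high-pass filter below are illustrative assumptions only, not the synthetic trials reported in the abstract.

```python
import numpy as np

def impedance_from_reflectivity(rc, imp0):
    """Recursive (trace-integration) impedance inversion from reflection coefficients."""
    imp = [imp0]
    for r in rc:
        imp.append(imp[-1] * (1.0 + r) / (1.0 - r))
    return np.array(imp)

# Hypothetical blocky impedance log and its reflectivity.
imp_true = np.concatenate([np.full(50, 5.0e6), np.full(50, 7.0e6), np.full(50, 6.0e6)])
rc = (imp_true[1:] - imp_true[:-1]) / (imp_true[1:] + imp_true[:-1])

# Crude stand-in for band-limited seismic: subtract a running mean to strip the low frequencies.
rc_bandlimited = rc - np.convolve(rc, np.ones(31) / 31, mode="same")

full_band = impedance_from_reflectivity(rc, imp_true[0])
band_limited = impedance_from_reflectivity(rc_bandlimited, imp_true[0])
# `band_limited` no longer tracks the true absolute impedance: the missing
# low-frequency trend must be supplied from wells or velocity models.
```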

Relevance:

90.00%

Publisher:

Abstract:

This paper reviews some basic issues and methods involved in using neural networks to respond in a desired fashion to a temporally varying environment. Some popular network models and training methods are introduced. A speech recognition example is then used to illustrate the central difficulty of temporal data processing: learning to notice and remember relevant contextual information. Feedforward network methods are applicable to cases where this problem is not severe. The application of these methods is explained, and applications are discussed in the areas of pure mathematics, chemical and physical systems, and economic systems. A more powerful but less practical algorithm for temporal problems, the moving targets algorithm, is sketched and discussed. For completeness, a few remarks are made on reinforcement learning.
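
One common way feedforward networks are applied to such temporal problems is to embed a fixed window of past samples into the input vector (a tapped delay line), so the relevant context is supplied explicitly rather than remembered by the network. A minimal sketch of that embedding, with a hypothetical signal and window length (the paper's own speech example is not reproduced here):

```python
import numpy as np

def delay_embed(signal, n_taps):
    """Turn a 1-D sequence into rows [x(t+n_taps-1), ..., x(t)] so that a plain
    feedforward network sees a fixed window of temporal context per example."""
    return np.stack([signal[i:len(signal) - n_taps + i + 1]
                     for i in reversed(range(n_taps))], axis=1)

# Hypothetical signal: a network would be trained to predict the next sample from each window.
x = np.sin(np.linspace(0, 20, 200))
X = delay_embed(x[:-1], n_taps=5)     # inputs: 5-sample windows
y = x[5:]                             # targets: the sample following each window
print(X.shape, y.shape)
```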

Relevance:

90.00%

Publisher:

Abstract:

All-optical data processing is expected to play a major role in future optical communications. The fiber nonlinear optical loop mirror (NOLM) is a valuable tool in optical signal processing applications. This paper presents an overview of our recent advances in developing NOLM-based all-optical processing techniques for application in fiber-optic communications. The use of in-line NOLMs as a general technique for all-optical passive 2R (reamplification, reshaping) regeneration of return-to-zero (RZ) on-off keyed signals in both high-speed, ultralong-distance transmission systems and terrestrial photonic networks is reviewed. In this context, a theoretical model is presented that describes the stable propagation of carrier pulses undergoing periodic all-optical self-regeneration in fiber systems with in-line deployment of nonlinear optical devices. A novel, simple pulse processing scheme using nonlinear broadening in normal dispersion fiber and loop mirror intensity filtering is described, and its use is demonstrated both as an optical decision element at an RZ receiver and as an in-line device realizing a transmission technique based on periodic all-optical RZ to nonreturn-to-zero-like format conversion. The important issue of phase-preserving regeneration of phase-encoded signals is also addressed by presenting a new design of NOLM based on distributed Raman amplification in the loop fiber. © 2008 Elsevier Inc. All rights reserved.
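
The intensity discrimination that underlies NOLM-based 2R regeneration can be seen from the textbook power transfer function of a loop mirror built around an asymmetric coupler, T(P) = 1 − 2α(1−α)[1 + cos((1−2α)γPL)]: low powers are largely reflected while higher powers switch through. The sketch below evaluates this curve; the coupling ratio, nonlinear coefficient, loop length, and power range are illustrative, not parameters of the systems reviewed in the paper.

```python
import numpy as np

def nolm_transmission(p_in, alpha=0.45, gamma=2.0, length=1.0):
    """Power transfer of a basic NOLM with coupling ratio alpha : 1-alpha,
    nonlinear coefficient gamma (1/(W km)) and loop length (km); values illustrative."""
    phase_diff = (1.0 - 2.0 * alpha) * gamma * p_in * length   # Kerr-induced phase difference
    return 1.0 - 2.0 * alpha * (1.0 - alpha) * (1.0 + np.cos(phase_diff))

# Low input powers are mostly reflected, higher powers are transmitted:
# the intensity discrimination exploited for 2R regeneration.
p = np.linspace(0, 16, 9)             # input peak power, W
print(np.round(nolm_transmission(p), 3))
```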

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although allowing databases to be explored and conclusions drawn, it had several drawbacks, the majority of which were due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing many data processing algorithms, firstly data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries which would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. Using MADEN allowed both universities and disciplines to be graphically compared, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
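
MADEN itself is not publicly available, but the core "matrix of density plots" idea can be sketched generically: draw a 2-D histogram for every pair of fields so an entire numeric table is visible in one window. The field names and synthetic data below are assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def density_matrix(table, fields, bins=40):
    """Draw a matrix of 2-D density plots (one per pair of fields), a minimal
    sketch of the 'matrix of density plots' idea described above."""
    k = len(fields)
    fig, axes = plt.subplots(k, k, figsize=(2.5 * k, 2.5 * k))
    for i, fi in enumerate(fields):
        for j, fj in enumerate(fields):
            ax = axes[i, j]
            if i == j:
                ax.hist(table[fi], bins=bins)               # 1-D distribution on the diagonal
            else:
                ax.hist2d(table[fj], table[fi], bins=bins)  # joint density off-diagonal
            ax.set_xticks([]); ax.set_yticks([])
    fig.tight_layout()
    return fig

# Hypothetical customer-like table with three numeric fields.
rng = np.random.default_rng(0)
data = {"age": rng.normal(45, 12, 5000),
        "balance": rng.lognormal(8, 1, 5000),
        "tenure": rng.integers(0, 30, 5000).astype(float)}
density_matrix(data, ["age", "balance", "tenure"])
```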

Relevance:

90.00%

Publisher:

Abstract:

All-optical data processing is expected to play a major role in future optical communications. Nonlinear effects in optical fibres have many attractive features and great, not yet fully explored, potential in optical signal processing. Here, we present an overview of our recent advances in developing novel techniques and approaches to all-optical processing based on optical fibre nonlinearities.