215 results for preprocessing
Abstract:
Oil-industry and academic groups have focused on 3-D wave-equation prestack depth migration because it can image complex geologic structures accurately while preserving wavefield information, which is favorable for lithologic imaging. The symplectic method, first proposed by Feng Kang in 1984, has become a focus of research in numerical computation and, owing to its merits, is expected to find wide application across the sciences. Against this background, this paper combines the symplectic method with 3-D wave-equation prestack depth migration to develop an effective numerical wavefield-extrapolation technique. Based on a detailed analysis of the computational method and of PC-cluster performance, a seismic prestack depth migration workflow has been formed that exploits the strengths of both the migration method and the PC cluster. The software built on this workflow, named "3D Wave Equation Prestack Depth Migration of Symplectic Method", has been registered with the National Bureau of Copyright (No. 0013767), and the Dagang and Daqing oil fields have put it into use for field-data processing. In this paper, the one-way wave-equation operator is decomposed into a phase-shift operator, a time-shift operator, and a high-order symplectic correction term arising from the approximation of the exponential. After reviewing operator anti-aliasing, computation of the maximum migration angle, and the imaging condition, we present impulse-response tests of the symplectic method. Taking the imaging results of the SEG/EAGE salt and overthrust models as examples to assess the software's ability to image complex geologic structures, the paper discusses the influence of imaging-parameter selection and of the seismic wavelet on the migration result, and compares 2-D and 3-D prestack depth migration results for the salt model. We also present impulse-response tests with the overthrust model. The imaging results for the two international benchmark models indicate that symplectic 3-D prestack depth migration accommodates strong lateral velocity variation and complex geologic structure. The huge computational cost is the key obstacle that has kept 3-D wave-equation prestack depth migration from being adopted by the oil industry. After analyzing the prestack depth migration workflow and the characteristics of the PC cluster, the paper puts forward: i) parallel algorithms over shots and over frequencies for common-shot-gather 3-D wave-equation prestack migration; ii) an optimized breakpoint (checkpoint) scheme for field-data processing; iii) dynamic and static load balancing among the nodes of the PC cluster during 3-D prestack depth migration. It has been shown that the computation time of 3-D prestack depth migration imaging is greatly shortened when the computing scheme presented in the paper is adopted. In addition, considering the 3-D wave-equation prestack depth migration workflow for complex media and examples of field-data processing, the paper emphasizes: i) seismic data preprocessing relevant to migration; ii) 2.5-D prestack depth migration velocity analysis; iii) 3-D prestack depth migration. The field-data processing results show satisfactory applicability of the proposed workflow.
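For orientation, the phase-shift/time-shift decomposition mentioned above corresponds to the standard split-step form of one-way wavefield extrapolation sketched schematically below; v_0 denotes a laterally constant reference velocity, and the high-order symplectic correction of the exponential is only indicated as "correction terms", since its exact form is not given in the abstract.

```latex
% Schematic: one-way extrapolation of a monochromatic wavefield over a depth step \Delta z.
\begin{align*}
P(k_x,k_y,z+\Delta z,\omega) &= e^{\,i k_z \Delta z}\,P(k_x,k_y,z,\omega),
  \qquad k_z=\sqrt{\omega^2/v^2-k_x^2-k_y^2},\\
e^{\,i k_z \Delta z} &\approx
  \underbrace{e^{\,i \Delta z\sqrt{\omega^2/v_0^2-k_x^2-k_y^2}}}_{\text{phase shift (wavenumber domain)}}
  \cdot
  \underbrace{e^{\,i \omega\,(1/v-1/v_0)\,\Delta z}}_{\text{time shift (space domain)}}
  \;+\;\text{correction terms}.
\end{align*}
```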
Abstract:
The article first introduces the basic theory of magnetotellurics and the essential methods of data acquisition and preprocessing. It then presents the methods for computing the transfer function, together with their respective advantages: the least-squares method, the remote-reference method, and the robust method. The article also describes the cause and influence of static shift and summarizes how to correct it efficiently, then concentrates on the theory of the popular impedance-tensor decomposition methods: the phase-sensitive method, the Groom-Bailey method, the general tensor decomposition method, and Mohr circle analysis. The central step of magnetotelluric data processing is inversion, which is also an important part of the article. The article first introduces the basic theory of the popular one-dimensional inversion methods (Automod, Occam, Rhoplus, Bostick, and Ipi2win) and two-dimensional inversion methods (Occam, Rebocc, Abie, and Nlcg). It then compares the practical advantages of each inversion method on test models and draws meaningful conclusions. Visual program design for magnetotelluric data processing is another central part of the article. Earlier visual programs for magnetotelluric data processing were neither satisfactory nor systematic: the processing methods were limited, data management was not systematic, and the data formats were not uniform. The article bases its visual program design on practicality, structure, variety, and extensibility, and adopts database technology and mixed-language programming; the result is a magnetotelluric data management and processing system that integrates a database storage and retrieval subsystem, a data-processing subsystem, and a graphical display subsystem. Finally, the article turns to application: taking the Tulargen Cu-Ni mining area in Xinjiang as a practical example and using the data-processing methods introduced above, it gives a detailed account of the magnetotelluric data interpretation procedure.
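For background, the transfer function estimated by the methods listed above is the impedance tensor relating the horizontal electric and magnetic fields; the textbook definitions of impedance, apparent resistivity, and phase (not specific to this article) are:

```latex
% Magnetotelluric impedance tensor at angular frequency \omega, with derived
% apparent resistivity and phase.
\begin{align*}
\begin{pmatrix} E_x \\ E_y \end{pmatrix}
  &= \begin{pmatrix} Z_{xx} & Z_{xy} \\ Z_{yx} & Z_{yy} \end{pmatrix}
     \begin{pmatrix} H_x \\ H_y \end{pmatrix},\\
\rho_{a,ij}(\omega) &= \frac{1}{\mu_0\,\omega}\,\bigl|Z_{ij}(\omega)\bigr|^2,
  \qquad \phi_{ij}(\omega) = \arg Z_{ij}(\omega).
\end{align*}
```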
Abstract:
Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis: boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine.
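As a toy illustration of one of the operations named above, area coloring, the sketch below labels connected regions of an image with a generic queue-based flood fill; it is not the chunk-based formulation developed in the thesis.

```python
from collections import deque

def color_areas(image):
    """Label 4-connected regions of equal pixel value (generic flood fill).

    `image` is a list of lists of values; returns a parallel grid of region ids.
    """
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Breadth-first flood fill from an unlabeled seed pixel.
            value, queue = image[r][c], deque([(r, c)])
            labels[r][c] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and labels[ny][nx] is None and image[ny][nx] == value:
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

# Example: two foreground blobs of 1s separated by background 0s.
print(color_areas([[1, 1, 0],
                   [0, 1, 0],
                   [0, 0, 1]]))
```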
Abstract:
M. H. Lee, and S. M. Garrett, Qualitative modelling of unknown interface behaviour, International Journal of Human Computer Studies, Vol. 53, No. 4, pp. 493-515, 2000
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how "content" should be routed. For example, content may be diverted through an intermediary DTN node for the purposes of preprocessing, authentication, etc. To support such capability, we implement Predicate Routing [7] where high-level constraints of DTN nodes are mapped into low-level routing predicates at the MANET level. Our testbed uses a Linux system architecture and leverages User Mode Linux [2] to emulate every node running a DTN Reference Implementation code [5]. In our initial prototype, we use the On Demand Distance Vector (AODV) MANET routing protocol. We use the network simulator ns-2 (ns-emulation version) to simulate the mobility and wireless connectivity of both DTN and MANET nodes. We present preliminary throughput results that demonstrate the efficient and correct operation of propagating routing predicates and, as a side effect, the performance benefit of content re-routing that dynamically (on-demand) breaks the underlying end-to-end TCP connection into shorter-length TCP connections.
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response time. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing (offline), for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which considers selecting landmarks at random. Finally, they are applied in two important problems arising naturally in large-scale graphs, namely social search and community detection.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
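A minimal sketch of the landmark scheme described above, using plain breadth-first search on an unweighted toy graph; the "highest degree" landmark selection used here is only a stand-in for the centrality-based strategies the paper studies.

```python
from collections import deque

def bfs_distances(graph, source):
    """Unweighted single-source shortest-path distances from `source`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, num_landmarks):
    """Offline stage: pick high-degree nodes as landmarks and store their distance vectors."""
    landmarks = sorted(graph, key=lambda u: len(graph[u]), reverse=True)[:num_landmarks]
    return {l: bfs_distances(graph, l) for l in landmarks}

def estimate_distance(index, u, v):
    """Online stage: upper-bound d(u, v) by the minimum of d(u, l) + d(l, v) over landmarks l."""
    return min(d[u] + d[v] for d in index.values() if u in d and v in d)

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
index = build_index(graph, num_landmarks=2)
print(estimate_distance(index, 0, 4))  # the estimate is an upper bound; here it equals the exact distance, 4
```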
Abstract:
We consider a Delay Tolerant Network (DTN) whose users (nodes) are connected by an underlying Mobile Ad hoc Network (MANET) substrate. Users can declaratively express high-level policy constraints on how “content” should be routed. For example, content can be directed through an intermediary DTN node for the purposes of preprocessing, authentication, etc., or content from a malicious MANET node can be dropped. To support such content routing at the DTN level, we implement Predicate Routing [1] where high-level constraints of DTN nodes are mapped into low-level routing predicates within the MANET nodes. Our testbed [2] uses a Linux system architecture with User Mode Linux [3] to emulate every DTN node with a DTN Reference Implementation code [4]. In our initial architecture prototype, we use the On Demand Distance Vector (AODV) routing protocol at the MANET level. We use the network simulator ns-2 (ns-emulation version) to simulate the wireless connectivity of both DTN and MANET nodes. Preliminary results show the efficient and correct operation of propagating routing predicates. For the application of content re-routing through an intermediary, as a side effect, results demonstrate the performance benefit of content re-routing that dynamically (on-demand) breaks the underlying end-to-end TCP connections into shorter-length TCP connections.
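Purely as an illustration of mapping a declarative content policy to a forwarding decision, the sketch below is hypothetical: the field names and rule format are invented here and are not taken from Predicate Routing or the DTN Reference Implementation.

```python
# Hypothetical illustration: map a high-level content policy to a next-hop decision.
def next_hop(bundle, neighbors):
    """Pick a next hop for a DTN bundle from simple declarative predicates.

    `bundle` has hypothetical fields 'dest' and 'needs_auth'; `neighbors` maps a
    node id to the roles it advertises and the destinations it claims routes to.
    """
    if bundle.get("needs_auth"):
        # Policy predicate: divert through an authenticating intermediary if one is reachable.
        for node, info in neighbors.items():
            if "authenticator" in info["roles"]:
                return node
    # Default predicate: hand the bundle to any neighbor that claims a route to the destination.
    for node, info in neighbors.items():
        if bundle["dest"] in info["routes"]:
            return node
    return None  # nothing matched: keep the bundle (store-and-forward)

neighbors = {"n2": {"roles": (), "routes": ("n7",)},
             "n3": {"roles": ("authenticator",), "routes": ()}}
print(next_hop({"dest": "n7", "needs_auth": True}, neighbors))   # -> n3 (diverted)
print(next_hop({"dest": "n7", "needs_auth": False}, neighbors))  # -> n2 (direct)
```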
Abstract:
A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels: a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus; it extends the dynamic range of eighth-nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high-frequency and low-frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing appears to be an auditory analog of the sustained and transient processing that is known to occur in vision.
Abstract:
This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 -> V2 and V1 -> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recent proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 -> V2 -> MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design.
Abstract:
The Fuzzy ART system introduced herein incorporates computations from fuzzy set theory into ART 1. For example, the intersection (∩) operator used in ART 1 learning is replaced by the MIN operator (∧) of fuzzy set theory. Fuzzy ART reduces to ART 1 in response to binary input vectors, but can also learn stable categories in response to analog input vectors. In particular, the MIN operator reduces to the intersection operator in the binary case. Learning is stable because all adaptive weights can only decrease in time. A preprocessing step, called complement coding, uses on-cell and off-cell responses to prevent category proliferation. Complement coding normalizes input vectors while preserving the amplitudes of individual feature activations.
Abstract:
A Fuzzy ART model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns is described. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations.
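A compact numerical sketch of the operations named above: complement coding, the fuzzy MIN, the vigilance (match) test, and the weight update. These are the standard Fuzzy ART equations; the parameter values are arbitrary.

```python
import numpy as np

def complement_code(a):
    """Complement coding: I = (a, 1 - a) has constant L1 norm, which prevents
    category proliferation while preserving individual feature amplitudes."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_and(x, y):
    """Fuzzy MIN operator; reduces to set intersection for binary vectors."""
    return np.minimum(x, y)

def resonate_and_learn(I, w, rho=0.75, beta=1.0):
    """Apply the vigilance (match) test and, on resonance, the learning rule
    w <- beta * (I ^ w) + (1 - beta) * w, which can only shrink the weights."""
    match = fuzzy_and(I, w).sum() / I.sum()
    if match >= rho:                       # resonance: the category accepts the input
        w = beta * fuzzy_and(I, w) + (1.0 - beta) * w
    return match, w

I = complement_code([0.2, 0.9])            # -> [0.2, 0.9, 0.8, 0.1], L1 norm = 2
w = np.ones_like(I)                        # an uncommitted category starts at all ones
print(resonate_and_learn(I, w))            # fast learning (beta = 1) sets w to I itself
```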
Abstract:
A new neural network architecture is introduced for incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors. The architecture, called Fuzzy ARTMAP, achieves a synthesis of fuzzy logic and Adaptive Resonance Theory (ART) neural networks by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. Fuzzy ARTMAP also realizes a new Minimax Learning Rule that conjointly minimizes predictive error and maximizes code compression, or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter by the minimum amount needed to correct a predictive error. As a result, the system automatically learns a minimal number of recognition categories, or "hidden units", to meet accuracy criteria. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy logic play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Improved prediction is achieved by training the system several times using different orderings of the input set. This voting strategy can also be used to assign probability estimates to competing predictions given small, noisy, or incomplete training sets. Four classes of simulations illustrate Fuzzy ARTMAP performance as compared to benchmark back propagation and genetic algorithm systems. These simulations include (i) finding points inside vs. outside a circle; (ii) learning to tell two spirals apart; (iii) incremental approximation of a piecewise continuous function; and (iv) a letter recognition database. The Fuzzy ARTMAP system is also compared to Salzberg's NGE system and to Simpson's FMMC system.
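The match-tracking step described above can be summarized in a few lines; the sketch below states the standard rule (raise the ART vigilance just above the match value of the offending category so that the search resumes), not the authors' full network.

```python
import numpy as np

def match_track(I, w_J, rho, eps=1e-3):
    """On a predictive error, raise vigilance just above the current match
    |I ^ w_J| / |I| so the chosen category J fails the match test and search resumes."""
    match = np.minimum(I, w_J).sum() / I.sum()
    return max(rho, match + eps)
```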
Abstract:
This paper compares the complexity of the sphere decoder (SD) and a previously proposed detection scheme, denoted here as block SD (BSD), when they are applied to the detection of multiple-input multiple-output (MIMO) systems in frequency-selective channels. The complexity of both algorithms depends on their preprocessing and tree search stages. Although the BSD was proposed as a means of greatly reducing the complexity of the preprocessing stage of the SD, no study was done on how the complexity of the tree search stage could be affected by that reduced preprocessing stage. This paper shows, both analytically and through simulation, that the reduction in preprocessing complexity provided by the BSD has the side effect of increasing the complexity of its tree search stage compared to that of the SD, independent of the signal-to-noise ratio (SNR). In addition, this paper shows how sorting the columns of the frequency-selective channel matrix in the SD does not reduce the complexity of the tree search stage, contrary to what occurs in frequency-flat channels.
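For reference, a minimal sketch of the two stages whose costs are compared above: QR preprocessing followed by a depth-first tree search with radius pruning. This is the plain real-valued sphere decoder on a toy channel, not the block variant (BSD); the sizes and constellation are arbitrary.

```python
import numpy as np

def sphere_decode(H, y, symbols, radius=np.inf):
    """Depth-first sphere decoder for y = H s + n with entries of s drawn from `symbols`."""
    # Preprocessing stage: QR-factor the channel so the metric becomes ||Q^T y - R s||^2,
    # which can be accumulated one layer at a time from the last row of R upwards.
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best, best_metric = None, radius

    def search(level, partial, metric):
        nonlocal best, best_metric
        if metric >= best_metric:          # prune: this partial path already leaves the sphere
            return
        if level < 0:                      # reached a leaf: a full candidate vector
            best, best_metric = partial.copy(), metric
            return
        for sym in symbols:                # tree search stage: extend the path by one symbol
            partial[level] = sym
            residual = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, metric + residual ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best, best_metric

# Toy 2x2 example with BPSK symbols.
H = np.array([[1.0, 0.3], [0.2, 0.9]])
s = np.array([1.0, -1.0])
y = H @ s + 0.05 * np.array([0.4, -0.7])
print(sphere_decode(H, y, symbols=(-1.0, 1.0)))
```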
Abstract:
Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
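A minimal sketch of the preprocessing-plus-classifier pipeline evaluated above, using scikit-learn's LinearDiscriminantAnalysis as the Fisher discriminant step and the Brier score as a reliability measure; the marker values below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for a five-marker panel (cTnI, CKMB, myoglobin, FABP, GPBB):
# AMI cases (y = 1) are drawn with elevated mean concentrations.
n = 200
y = rng.integers(0, 2, size=n)
X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 5))

# Fisher discriminant analysis as a preprocessing step, then logistic regression
# to output a probability of AMI.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), LogisticRegression())
model.fit(X[:150], y[:150])

p = model.predict_proba(X[150:])[:, 1]
print("accuracy:", (p.round() == y[150:]).mean())
print("Brier score:", brier_score_loss(y[150:], p))
```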