969 results for seismic data processing
Abstract:
A detailed analysis of the morphology and of the Holocene seismic and sequence stratigraphy and architecture of the infralittoral sedimentary environment of the El Masnou coast (Catalonia, NW Mediterranean Sea) was carried out using multibeam bathymetry and GeoPulse seismic data. This environment extends down to 26-30 m water depth and is defined morphologically by two depositional wedges whose seafloor is affected by erosive furrows, slides, fields of large- and small-scale wavy bedforms, and dredging trenches and pits. Erosive terraces are also identified in the transition domain toward the inner continental shelf. The Holocene stratigraphy of the infralittoral environment is defined by two major seismic sequences (lower and upper), each formed by internal seismic units. The sequences and units are characterised by downlapping surfaces made up of deposits formed by progradation of coastal lithosomes. The stratigraphy and stratal architecture, displaying a retrogradational arrangement with progradational patterns of minor order, were controlled by different sea-level positions. The stratigraphic division represents the coastal response to the last fourth-order transgressive and highstand conditions, modulated by small-scale sea-level oscillations (≈1-2 m) of fifth to sixth order. This study also highlights the advantage of an integrated analysis using acoustic/seismic methods, in combination with marine geological observations, for the practical assessment of anthropogenic effects on infralittoral domains.
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities as the kernel bandwidth approaches infinity is also investigated. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
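For readers unfamiliar with density ridges, the sketch below projects a point onto a one-dimensional ridge of a Gaussian kernel density estimate. It uses the well-known subspace-constrained mean shift iteration rather than the trust region Newton method developed in the thesis, and the bandwidth, data and tolerances are illustrative assumptions only.

```python
import numpy as np

def kde_weights_hessian(x, data, h):
    """Gaussian-kernel weights and (unnormalised) density Hessian at point x."""
    diff = data - x                                   # rows are x_i - x
    w = np.exp(-0.5 * np.sum(diff ** 2, axis=1) / h ** 2)
    d = x.shape[0]
    hess = np.einsum('i,ij,ik->jk', w, diff, diff) / h ** 4 - w.sum() * np.eye(d) / h ** 2
    return w, hess

def ridge_project(x, data, h, n_iter=500, tol=1e-8):
    """Move x onto a one-dimensional density ridge by subspace-constrained mean shift."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        w, hess = kde_weights_hessian(x, data, h)
        shift = (w[:, None] * data).sum(axis=0) / w.sum() - x   # mean-shift vector
        vals, vecs = np.linalg.eigh(hess)                       # ascending eigenvalues
        V = vecs[:, :-1]                  # normal space: drop the ridge direction
        step = V @ (V.T @ shift)          # constrain the step to the normal space
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

# toy usage: project a noisy point onto the ridge of data scattered around a parabola
rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, 500)
data = np.c_[t, t ** 2] + 0.1 * rng.normal(size=(500, 2))
print(ridge_project([0.5, 0.8], data, h=0.3))
```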
Abstract:
One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour / intensity) of a mineral grain will vary with the orientation of the crystal lattice relative to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane- and cross-polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images, and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
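The six composite data sets lend themselves to a direct array implementation. The sketch below assumes a stack of co-registered intensity images, one per polarizer angle; the exact definition of the gradient image in the Rotating Polarizer Stage system is not given in the abstract, so a plain spatial gradient of the average-intensity image is used here as a stand-in.

```python
import numpy as np

def composite_data_sets(stack, angles):
    """Build the six composite images from a rotating-polarizer image stack.

    stack  : (k, H, W) array of intensity images, one per polarizer angle
    angles : (k,) array of polarizer angles in degrees
    """
    avg_img = stack.mean(axis=0)                 # average intensity image
    max_img = stack.max(axis=0)                  # maximum intensity image
    min_img = stack.min(axis=0)                  # minimum intensity image
    max_pos = angles[stack.argmax(axis=0)]       # angle of maximum intensity per pixel
    min_pos = angles[stack.argmin(axis=0)]       # angle of minimum intensity per pixel
    gy, gx = np.gradient(avg_img)                # one plausible gradient definition
    grad_img = np.hypot(gx, gy)                  # gradient magnitude, used for grain boundaries
    return avg_img, max_img, min_img, max_pos, min_pos, grad_img

# toy usage with a synthetic 10-frame stack over 0-90 degrees
angles = np.linspace(0, 90, 10, endpoint=False)
stack = np.random.default_rng(1).random((10, 64, 64))
composites = composite_data_sets(stack, angles)
```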
Abstract:
The telemetry data processing operations intended for a given mission are pre-defined by the onboard telemetry configuration, since the mission trajectory and the overall telemetry methodology have stabilized for recent ISRO vehicles. The telemetry data processing problem is reduced through hierarchical problem reduction, whereby the sequencing of operations becomes the control task and the operations on data become the function tasks. The inputs, outputs and execution criteria of each function task are captured in tables, which are examined by the control task; the control task then schedules a function task when its criteria are met.
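A table-driven control/function task split of this kind can be sketched as follows. The field names and the two example tasks are hypothetical, since the abstract does not specify the actual table layout used for ISRO telemetry processing.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FunctionTask:
    name: str
    inputs: List[str]                      # keys the task reads from the shared store
    outputs: List[str]                     # keys the task writes back
    criterion: Callable[[Dict], bool]      # execution criterion over the store
    run: Callable[[Dict], Dict]            # operation on data

def control_task(tasks: List[FunctionTask], store: Dict) -> None:
    """Control task: scan the task table and run every function task whose criterion holds."""
    for task in tasks:
        if task.criterion(store) and all(k in store for k in task.inputs):
            results = task.run(store)
            store.update({k: results[k] for k in task.outputs})

# toy usage: decommutate a raw frame, then convert to engineering units
tasks = [
    FunctionTask("decommutate", ["raw_frame"], ["channels"],
                 lambda s: "raw_frame" in s,
                 lambda s: {"channels": list(s["raw_frame"])}),
    FunctionTask("calibrate", ["channels"], ["engineering"],
                 lambda s: "channels" in s,
                 lambda s: {"engineering": [2.0 * c for c in s["channels"]]}),
]
store = {"raw_frame": bytes([1, 2, 3])}
control_task(tasks, store)
print(store["engineering"])
```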
Abstract:
Analysis by reduction is a linguistically motivated method for checking the correctness of a sentence. It can be modelled by restarting automata. In this paper we propose a method for learning restarting automata that are strictly locally testable (SLT-R-automata). The method is based on the concept of identification in the limit from positive examples only. We also characterize the class of languages accepted by SLT-R-automata with respect to the Chomsky hierarchy.
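The classical learner for strictly k-testable string languages from positive examples collects the allowed prefixes, suffixes and length-k factors of the sample. The sketch below shows that baseline only; it does not model the restarting-automaton (SLT-R) extension to analysis by reduction proposed in the paper.

```python
def factors(w, k):
    """All length-k substrings of w."""
    return {w[i:i + k] for i in range(len(w) - k + 1)}

def learn_k_slt(sample, k):
    """Collect allowed prefixes, suffixes and length-k factors from positive examples."""
    prefixes = {w[:k - 1] for w in sample}
    suffixes = {w[-(k - 1):] for w in sample}
    interior = set().union(*(factors(w, k) for w in sample))
    return prefixes, suffixes, interior

def accepts(w, model, k):
    """Membership test for the learned strictly k-testable description."""
    prefixes, suffixes, interior = model
    return (w[:k - 1] in prefixes
            and w[-(k - 1):] in suffixes
            and factors(w, k) <= interior)

# toy usage: the positive sample never contains the factor 'bb',
# so strings containing 'bb' are rejected by the learned description
sample = ["ab", "aba", "abab", "abaa", "ababa"]
model = learn_k_slt(sample, k=2)
print(accepts("ababab", model, k=2), accepts("abba", model, k=2))
```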
Abstract:
Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of economy, science, administration and the internet. In this report we introduce an approach for using evolutionary algorithms to breed fuzzy classifier systems. This approach was exercised, as part of a structured procedure, by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
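A heavily reduced illustration of the idea is given below: an evolutionary loop tunes Gaussian fuzzy memberships for a single classification rule. The actual contest system described in the report is far richer; all parameters and structure here are assumptions made for the sake of a runnable example.

```python
import math
import random

def membership(x, center, width):
    """Gaussian fuzzy membership degree of value x."""
    return math.exp(-((x - center) / width) ** 2)

def classify(sample, genome):
    """Fuzzy AND (minimum) of per-feature memberships, thresholded at 0.5."""
    degrees = [membership(x, c, w) for x, (c, w) in zip(sample, genome)]
    return 1 if min(degrees) > 0.5 else 0

def fitness(genome, data):
    """Classification accuracy of the genome on labelled data."""
    return sum(classify(x, genome) == y for x, y in data) / len(data)

def evolve(data, n_features, pop_size=30, generations=100, seed=0):
    """(mu + lambda)-style loop: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[(rng.uniform(0, 1), rng.uniform(0.1, 1)) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=lambda g: fitness(g, data), reverse=True)[:pop_size // 2]
        children = [[(c + rng.gauss(0, 0.05), max(0.05, w + rng.gauss(0, 0.05)))
                     for c, w in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, data))

# toy data: label 1 for points near (0.7, 0.7), label 0 elsewhere
rng = random.Random(1)
data = []
for _ in range(200):
    x = (rng.random(), rng.random())
    data.append((x, 1 if abs(x[0] - 0.7) < 0.15 and abs(x[1] - 0.7) < 0.15 else 0))
best = evolve(data, n_features=2)
print("training accuracy:", fitness(best, data))
```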
Abstract:
A conceptual information system consists of a database together with conceptual hierarchies. The management system TOSCANA visualizes arbitrary combinations of conceptual hierarchies as nested line diagrams and allows on-line interaction with a database to analyze data conceptually. The paper describes the conception of conceptual information systems and discusses the use of their visualization techniques for on-line analytical processing (OLAP).
Abstract:
While most data analysis and decision support tools use numerical aspects of the data, Conceptual Information Systems focus on their conceptual structure. This paper discusses how both approaches can be combined.
Abstract:
We present a new algorithm called TITANIC for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.
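As a point of reference, the brute-force computation below enumerates the formal concepts of a toy context by closing every attribute subset. TITANIC itself avoids this by working level-wise with support counts, in the manner of frequent-itemset mining; the sketch only illustrates what is being computed, not how TITANIC computes it.

```python
from itertools import combinations

context = {                      # objects -> attributes of a toy formal context
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
    "o4": {"b"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects having all attributes in attrs."""
    return {g for g, row in context.items() if attrs <= row}

def intent(objs):
    """Attributes shared by all objects in objs."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

def concepts():
    """All (extent, intent) pairs, obtained by closing every attribute subset."""
    found = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            ext = extent(set(attrs))
            found.add((frozenset(ext), frozenset(intent(ext))))
    return found

for ext, inn in sorted(concepts(), key=lambda c: len(c[0]), reverse=True):
    print(sorted(ext), sorted(inn))
```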
Abstract:
In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
Abstract:
Formal Concept Analysis is an unsupervised learning technique for conceptual clustering. We introduce the notion of iceberg concept lattices and show their use in Knowledge Discovery in Databases (KDD). Iceberg lattices are designed for analyzing very large databases. In particular they serve as a condensed representation of frequent patterns as known from association rule mining. In order to show the interplay between Formal Concept Analysis and association rule mining, we discuss the algorithm TITANIC. We show that iceberg concept lattices are a starting point for computing condensed sets of association rules without loss of information, and are a visualization method for the resulting rules.
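The sketch below illustrates the iceberg idea on a hand-made set of closed itemsets with their supports (consistent with a four-transaction toy database): keeping only the closed sets that reach a minimum support yields the iceberg lattice, and exact rules such as c → a can be read off from closures. Supports are hard-coded for illustration rather than computed by TITANIC.

```python
closed_itemsets = {            # closed itemset -> support (fraction of transactions)
    frozenset(): 1.0,
    frozenset({"a"}): 0.75,
    frozenset({"b"}): 0.75,
    frozenset({"a", "c"}): 0.50,
    frozenset({"a", "b"}): 0.50,
    frozenset({"a", "b", "c"}): 0.25,
}

minsupp = 0.5
iceberg = {s: supp for s, supp in closed_itemsets.items() if supp >= minsupp}
print("iceberg concepts:", [sorted(s) for s in iceberg])

def closure(items):
    """Smallest closed superset of `items` among the stored closed itemsets."""
    supersets = [s for s in closed_itemsets if items <= s]
    return min(supersets, key=len)

# the closure of {c} is {a, c}, so the exact rule c -> a holds with confidence 1
print("closure of {c}:", sorted(closure(frozenset({"c"}))))
```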
Abstract:
Among many other knowledge representation formalisms, Ontologies and Formal Concept Analysis (FCA) aim at modeling ‘concepts’. We discuss how these two formalisms may complement one another from an application point of view. In particular, we will see how FCA can be used to support Ontology Engineering, and how ontologies can be exploited in FCA applications. The interplay of FCA and ontologies is studied along the life cycle of an ontology: (i) FCA can support the building of the ontology as a learning technique. (ii) The established ontology can be analyzed and navigated using techniques of FCA. (iii) Last but not least, the ontology may be used to improve an FCA application.