921 results for seismic data processing
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether these maintain a prescribed safety clearance to the surrounding components, both at rest and during motion. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety clearance.

In this thesis we present a solution for computing, in real time, all regions between two geometric objects that fall below the safety clearance. Each object is given as a set of primitives (e.g., triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and refer to them as the set of tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this thesis we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are significantly faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for identifying primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. Our benchmarks show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before. We call this data structure Shrubs. Previous approaches to reducing the memory footprint of uniform grids mainly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach exploits these redundant cell contents to losslessly compress the cell contents of a uniform grid to one fifth of their original size and to decompress them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond clearance analysis itself, we present applications to various path-planning problems.
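To make the pair-pruning idea concrete, here is a minimal sketch in Python (illustrative names and structure only, not the thesis's implementation, which additionally uses a dual-space narrow-phase test and uniform grids): two triangles can only violate a clearance d if their axis-aligned bounding boxes, one inflated by d, overlap, so cheap box tests can discard most primitive pairs before any exact tolerance test.

```python
import numpy as np

def aabb(triangle):
    """Axis-aligned bounding box of a triangle given as a 3x3 vertex array."""
    return triangle.min(axis=0), triangle.max(axis=0)

def may_violate_clearance(tri_a, tri_b, d):
    """Conservative broad-phase filter (illustrative sketch, not the thesis's code):
    returns False only if the two triangles are guaranteed to keep the safety
    clearance d; a True result still requires an exact triangle-triangle
    tolerance test (narrow phase)."""
    lo_a, hi_a = aabb(tri_a)
    lo_b, hi_b = aabb(tri_b)
    # The box of tri_a inflated by d overlaps the box of tri_b iff, on every
    # axis, the intervals [lo_a - d, hi_a + d] and [lo_b, hi_b] intersect.
    return bool(np.all(lo_a - d <= hi_b) and np.all(lo_b <= hi_a + d))

# Hypothetical usage: two parallel triangles 1.05 apart along z.
tri1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tri2 = tri1 + np.array([0.0, 0.0, 1.05])
print(may_violate_clearance(tri1, tri2, 0.1))  # False: pair can be discarded
print(may_violate_clearance(tri1, tri2, 2.0))  # True: exact test still needed
```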
Abstract:
Reflection seismic data from the F3 block in the Dutch North Sea exhibit many large-amplitude reflections at shallow horizons, typically categorized as “brightspots” (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant “flatness” contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, and these effects remain poorly understood. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir’s upper boundary. Two effects must be considered: (1) the “wedge-model” tuning effects of the flatspot and overlying brightspots, dimspots, or polarity reversals; and (2) the stacking effects that result from the possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled both phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can produce similar large-amplitude but flat reflections, in some cases even causing an interface expected to produce a ‘dimspot’ to appear as a ‘brightspot’. Analysis of NMO stretch and muting shows that critical-offset data are likely excluded from the stacked output; if post-critical reflections are nevertheless included in stacking, unusual results will be observed. In the North Sea case, we conclude that the tuning effect was the primary cause of the brightness and flatness of these reflections. However, care should be taken when muting reflections with a wide range of incidence angles, as the inclusion of critical-offset data may introduce spurious features into the stacked section.
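For orientation, the block below collects the standard relations behind the tuning and muting argument (textbook formulas, not taken from the paper; the velocities are illustrative assumptions): the hyperbolic moveout that the NMO correction removes, the frequency lowering (stretch) that the correction introduces at far offsets, and the critical angle beyond which flatspot reflections become post-critical.

```latex
% Standard relations; v_1 = 1800 m/s and v_2 = 2200 m/s are illustrative
% placeholder velocities, not values from the F3 study.
\[
  t(x) = \sqrt{t_0^{2} + \frac{x^{2}}{v^{2}}},
  \qquad
  \frac{f_{\mathrm{NMO}}}{f} = \frac{t_0}{t(x)},
  \qquad
  \theta_c = \arcsin\!\left(\frac{v_1}{v_2}\right)
  \approx \arcsin\!\left(\frac{1800}{2200}\right) \approx 55^{\circ}.
\]
```

At far offsets t(x) greatly exceeds t_0, so the stretch factor grows and such samples are normally muted; retaining offsets beyond the critical angle instead admits post-critical, phase-shifted flatspot energy into the stack.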
Abstract:
Applying location-focused data protection law within a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud involve multiple layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.
Abstract:
Ancient Lake Ohrid is a steep-sided, oligotrophic karst lake that most likely formed tectonically during the Pliocene and is often referred to as a hotspot of endemic biodiversity. This study aims to trace significant lake level fluctuations at Lake Ohrid using high-resolution acoustic data in combination with lithological, geochemical, and chronological information from two sediment cores recovered from sub-aquatic terrace levels at ca. 32 and 60 m water depth. According to our data, significant lake level fluctuations with prominent lowstands of ca. 60 and 35 m below the present water level occurred during Marine Isotope Stage (MIS) 6 and MIS 5, respectively. The effect of these lowstands on biodiversity in most coastal parts of the lake was negligible, owing to only small changes in lake surface area, coastline, and habitat. In contrast, biodiversity in shallower areas was more severely affected by the disconnection of today's sublacustrine springs from the main water body. Multichannel seismic data from deeper parts of the lake clearly image several clinoform structures stacked on top of each other. These stacked clinoforms indicate significantly lower lake levels prior to MIS 6 and a stepwise rise of the water level, with intermittent stillstands, since the lake's existence as a water-filled body, which might have enhanced the expansion of endemic species within Lake Ohrid.
Abstract:
The article proposes granular computing as a theoretical, formal and methodological basis for the newly emerging research field of human–data interaction (HDI). We argue that the ability to represent and reason with information granules is a prerequisite for data legibility. As such, it allows the research agenda of HDI to be extended to encompass the topic of collective intelligence amplification, which is seen as an opportunity offered by today's increasingly pervasive computing environments. As an example of collective intelligence amplification in HDI, we introduce a collaborative urban planning use case in a cognitive city environment and show how an iterative process of user input and human-oriented automated data processing can support collective decision making. As a basis for the automated human-oriented data processing, we use the spatial granular calculus of granular geometry.
Abstract:
Navigation of deep space probes is most commonly performed using the spacecraft Doppler tracking technique. Orbital parameters are determined from a series of repeated measurements of the frequency shift of a microwave carrier over a given integration time. Currently, both ESA and NASA operate antennas at several sites around the world to ensure the tracking of deep space probes. Only a small number of software packages are currently used to process Doppler observations. The Astronomical Institute of the University of Bern (AIUB) has recently started the development of Doppler data processing capabilities within the Bernese GNSS Software. This software has been used extensively for Precise Orbit Determination of Earth-orbiting satellites using GPS data collected by on-board receivers and for the subsequent determination of the Earth's gravity field. In this paper, we present the current status of the Doppler data modeling and orbit determination capabilities in the Bernese GNSS Software using GRAIL data. In particular, we focus on the implemented orbit determination procedure used for the combined analysis of Doppler and inter-satellite Ka-band data. We show that even at this early stage of development we can achieve an accuracy of a few mHz on two-way S-band Doppler observations and of 2 µm/s on Ka-band range-rate (KBRR) data from the GRAIL primary mission phase.
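For scale, the standard two-way Doppler relation links the quoted frequency accuracy to line-of-sight velocity; assuming a carrier near 2.3 GHz (an illustrative S-band value, not the mission's exact frequency), a few mHz corresponds to roughly 0.2 mm/s in range rate:

```latex
% Two-way Doppler shift and the resulting range-rate accuracy;
% f_0 = 2.3 GHz and sigma_{Delta f} = 3 mHz are assumed illustrative values.
\[
  \Delta f = -\frac{2\dot{\rho}}{c}\, f_0,
  \qquad
  \sigma_{\dot{\rho}} = \frac{c\,\sigma_{\Delta f}}{2 f_0}
  \approx \frac{(3\times 10^{8}\,\mathrm{m/s})\,(3\times 10^{-3}\,\mathrm{Hz})}{2\,(2.3\times 10^{9}\,\mathrm{Hz})}
  \approx 2\times 10^{-4}\,\mathrm{m/s}.
\]
```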
Abstract:
A wide variety of spatial data collection efforts are ongoing throughout local, state and federal agencies, private firms and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map and Geospatial One-Stop, to reduce duplicative spatial data collection and to promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users want. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data and those using GIS to meet specific needs to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets of common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through Data Clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, which supports user-friendly metadata creation, open access licenses, archival services and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.
Abstract:
Clinical Research Data Quality Literature Review and Pooled Analysis
We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods used. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers, and error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors that could not be resolved here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis
Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality that builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractor's Perceptions of Factors Impacting the Accuracy of Abstracted Data
Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses, yet the factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess their perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature and differed from the literature in 5 of the factors in the top 25%. The Delphi results refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms
Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates, yet distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate the cognitive demands of medical record abstraction and the extent of external cognitive support provided by a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly high for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
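As a small illustration of the pooled metric used above (the field and error counts here are made up for demonstration, not taken from the reviewed studies), error rates can be normalized to errors per 10,000 fields before pooling, with the pooled rate weighting each study by the number of fields inspected:

```python
def errors_per_10k(n_errors, n_fields):
    """Normalize an error count to errors per 10,000 fields."""
    return 10_000 * n_errors / n_fields

# Hypothetical studies: (errors found, fields inspected)
studies = [(12, 48_000), (7, 3_500), (150, 10_000)]

rates = [errors_per_10k(e, f) for e, f in studies]
print([round(r, 1) for r in rates])   # per-study rates: [2.5, 20.0, 150.0]

# Pooled rate: total errors over total fields, i.e. field-weighted.
pooled = errors_per_10k(sum(e for e, _ in studies),
                        sum(f for _, f in studies))
print(round(pooled, 1))               # ~27.5 errors per 10,000 fields
```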
Abstract:
High-resolution, multichannel seismic data collected across the Great Bahama Bank margin and the adjacent Straits of Florida indicate that the deposition of Neogene-Quaternary strata along this transect is controlled by two sedimentation mechanisms: (1) west-dipping layers of the platform margin, which are a product of sea-level-controlled, platform-derived downslope sedimentation; and (2) east- or north-dipping drift deposits in the basinal areas, which are deposited by ocean currents. These two sediment systems are active simultaneously and interfinger at the toe of slope. The prograding system consists of sigmoidal clinoforms that advanced the margin some 25 km into the Straits of Florida. The foresets of the clinoforms are approximately 600 m high, with variable slope angles that steepen significantly in the Pleistocene section. The seismic facies of the prograding clinoforms on the slope is characterized by dominant, partly chaotic, cut-and-fill geometries caused by submarine canyons oriented downslope. In the basin axis, seismic geometries and facies document deposition from and by currents. Most impressive is an 800-m-thick drift deposit at the confluence of the Santaren Channel and the Straits of Florida. This "Santaren Drift" is slightly asymmetric, thinning to the north. The drift displays a highly coherent seismic facies characterized by a continuous succession of reflections, indicating very regular sedimentation. Leg 166 of the Ocean Drilling Program (ODP) drilled a transect of five deep holes between 2 and 30 km from the modern platform margin and retrieved sediments from both the slope and basin systems. The Neogene slope sediments consist of peri-platform oozes intercalated with turbidites, whereas the basinal drift deposits consist of more homogeneous, fine-grained carbonates that were deposited without major hiatuses by the Florida Current starting at approximately 12.4 Ma. Sea-level fluctuations, which controlled carbonate production on Great Bahama Bank through repeated exposure of the platform top, controlled the lithologic alternations and hiatuses in sedimentation across the transect. Both sedimentary systems are contained in 17 seismic sequences identified in the Neogene-Quaternary section. Seismic sequence boundaries were identified based on geometric unconformities beneath the Great Bahama Bank, and all sequence boundaries could be traced across the entire transect into the Straits of Florida. Biostratigraphic age determinations indicate that the sequence-boundary reflections have chronostratigraphic significance across both depositional environments.
Abstract:
We report the northernmost and deepest known occurrence of deep-water pycnodontine oysters, based on two surveys along the French Atlantic continental margin to the La Chapelle continental slope (2006) and the Guilvinec Canyon (2008). The combined use of multibeam bathymetry, seismic profiling, CTD casts and a remotely operated vehicle (ROV) made it possible to describe the physical habitat and to assess the oceanographic controls on the recently described species Neopycnodonte zibrowii. These oysters have been observed in vivo at depths from 540 to 846 m, colonizing overhanging banks or escarpments protruding from steep canyon flanks. Especially in the Bay of Biscay, such physical habitats may only be observed within canyons, where they are created by both long-term turbiditic and contouritic processes. Frequent observations of sand ripples on the seabed indicate the presence of a steady but enhanced bottom current of about 40 cm/s. The occurrence of the oysters also coincides with the interface between the Eastern North Atlantic Water and the Mediterranean Outflow Water. A combination of this water-mass mixing, internal tide generation and strong primary surface productivity may generate an enhanced nutrient flux, which is funnelled through the canyon. Where these environmental conditions are met, up to 100 individuals per m² may be observed. These deep-water oysters require a vertical habitat, which is often incompatible with the requirements of other sessile organisms, and they are only sparsely distributed along the continental margins. The discovery of these giant oyster banks illustrates the rich biodiversity of deep-sea canyons and their underestimated role as true ecosystem hotspots.
Abstract:
In recent years, profiling floats, which form the basis of the successful international Argo observatory, have also been considered as platforms for marine biogeochemical research. This study showcases the utility of floats as a novel tool for combined gas measurements of CO2 partial pressure (pCO2) and O2. The float prototypes were equipped with a small, submersible pCO2 sensor and an O2 optode for high-resolution measurements in the surface ocean layer. Four consecutive deployments were carried out between November 2010 and June 2011 near the Cape Verde Ocean Observatory (CVOO) in the eastern tropical North Atlantic. The profiling float performed upcasts every 31 h while measuring pCO2, O2, salinity, temperature, and hydrostatic pressure in the upper 200 m of the water column. To maintain accuracy, regular pCO2 sensor zeroings at depth and at the surface, as well as optode measurements in air, were performed for each profile. Through the application of data processing procedures (e.g., time-lag correction), the accuracy of float-borne pCO2 measurements was greatly improved (10–15 µatm for the water column and 5 µatm for surface measurements). O2 measurements yielded an accuracy of 2 µmol/kg. First results of this pilot study show the possibility of using profiling floats as a platform for detailed and unattended observations of marine carbon and oxygen cycle dynamics.
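A minimal sketch of the kind of time-lag correction mentioned above, assuming the sensor behaves as a first-order system with response time tau (the value of tau, the step profile, and the function name are placeholders for illustration, not the study's actual processing chain):

```python
import numpy as np

def timelag_correct(t, x, tau):
    """First-order sensor response correction: if the sensor output x follows
    the ambient signal with response time tau (dx/dt = (x_true - x) / tau),
    the ambient value can be estimated as x_true ~ x + tau * dx/dt."""
    dxdt = np.gradient(x, t)
    return x + tau * dxdt

# Hypothetical upcast: ambient pCO2 steps from 380 to 400 µatm at t = 200 s,
# and the sensor responds with an assumed response time tau = 60 s.
t = np.arange(0.0, 600.0, 10.0)                    # seconds
ambient = np.where(t < 200, 380.0, 400.0)          # µatm
tau = 60.0
measured = 380.0 + (ambient - 380.0) * (1 - np.exp(-np.clip(t - 200, 0, None) / tau))
corrected = timelag_correct(t, measured, tau)
print(np.round(measured[-5:], 1))    # lagged values still approaching 400 µatm
print(np.round(corrected[-5:], 1))   # corrected values closer to the ambient step
```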