935 results for open source seismic data processing packages


Relevance: 100.00%

Abstract:

The seismic data were acquired north of the Knipovich Ridge on the western Svalbard margin during cruise MSM21/4. They were recorded using a Geometrics GeoEel streamer of either 120 channels (profiles p100-p208) or 88 channels (profiles p300-p805) with a group spacing of 1.56 m and a sampling rate of 2 kHz. A GI-Gun (2×1.7 l) with a main frequency of ~150 Hz was used as a source and operated at a shot interval of 6-8 s.

Processing of profiles p100-p208 and p600-p805: Positions for each channel were calculated by backtracking along the profiles from the GI-Gun GPS positions. The shot gathers were analyzed for abnormal amplitudes below the seafloor reflection by comparing neighboring traces in different frequency bands within sliding time windows. To suppress surface-generated water noise, a tau-p filter was applied in the shot-gather domain. Common mid-point (CMP) profiles were then generated through crooked-line binning with a CMP spacing of 1.5625 m. A zero-phase band-pass filter with corner frequencies of 60 Hz and 360 Hz was applied to the data. Based on regional velocity information from MCS data [Sarkar, 2012], an interpolated and extrapolated 3D interval velocity model was created below the digitized seafloor reflection of the high-resolution streamer data. This velocity model was used to apply a CMP stack and an amplitude-preserving Kirchhoff post-stack time migration.

Processing of profiles p400-p500: Data were sampled at 0.5 ms and sorted into the common mid-point (CMP) domain with a bin spacing of 5 m. Normal move-out correction was carried out with a velocity of 1500 m/s, and an Ormsby band-pass filter with corner frequencies at 40, 80, 600 and 1000 Hz was applied. The data were time migrated using the water velocity.
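
The zero-phase band-pass step described above can be reproduced in outline with standard signal-processing tools. The sketch below is a minimal illustration, not the processing code used for the cruise data; it assumes a single trace sampled at the 2 kHz rate quoted in the abstract and uses the 60-360 Hz corner frequencies.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                   # sampling rate quoted in the abstract (2 kHz)
f_low, f_high = 60.0, 360.0   # corner frequencies from the abstract

def zero_phase_bandpass(trace, fs, f_low, f_high, order=4):
    """Zero-phase Butterworth band-pass: filter forwards and backwards."""
    nyq = 0.5 * fs
    b, a = butter(order, [f_low / nyq, f_high / nyq], btype="band")
    return filtfilt(b, a, trace)   # filtfilt introduces no phase distortion

# Example with a synthetic trace: a 100 Hz signal buried in broadband noise.
t = np.arange(0, 1.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 100.0 * t) + 0.5 * np.random.randn(t.size)
filtered = zero_phase_bandpass(trace, fs, f_low, f_high)
```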

Relevance: 100.00%

Abstract:

Background: Gray-scale images make up the bulk of data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers with little or no background in software development because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive and not well suited to a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific and do not provide a clear path when one wants to shape a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
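
The prototyping pattern described here, small single-task tools exchanging intermediate results through the hard disk, can be sketched as follows. The tool names below (`denoise_tool`, `segment_tool`) are placeholders rather than actual MIA commands; the sketch only illustrates the general pattern of chaining independent processes through files on disk instead of in-memory pipelines.

```python
import subprocess
from pathlib import Path

# Hypothetical single-task command-line tools; NOT actual MIA binaries.
# Each step reads an image file from disk and writes its result back to disk,
# so working memory only ever holds what a single processing step needs.
work = Path("work")
work.mkdir(exist_ok=True)

steps = [
    ["denoise_tool", "--in", "input.png", "--out", str(work / "denoised.png")],
    ["segment_tool", "--in", str(work / "denoised.png"), "--out", str(work / "mask.png")],
]

for cmd in steps:
    subprocess.run(cmd, check=True)   # fail fast if any step exits non-zero
```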

Relevance: 100.00%

Abstract:

Monitoring land-cover changes on sites of conservation importance allows environmental problems to be detected, solutions to be developed and the effectiveness of actions to be assessed. However, the remoteness of many sites or a lack of resources means these data are frequently not available. Remote sensing may provide a solution, but large-scale mapping and change detection may not be appropriate, necessitating site-level assessments. These need to be easy to undertake, rapid and cheap. We present an example of a Web-based solution based on free and open-source software and standards (including PostGIS, OpenLayers, Web Map Services, Web Feature Services and GeoServer) to support assessments of land-cover change (and validation of global land-cover maps). Authorised users are provided with means to assess land-cover visually and may optionally provide uncertainty information at various levels: from a general rating of their confidence in an assessment to a quantification of the proportions of land-cover types within a reference area. Versions of this tool have been developed for the TREES-3 initiative (Simonetti, Beuchle and Eva, 2011), which monitors tropical land-cover change through ground-truthing at latitude/longitude degree confluence points, and for monitoring change within and around Important Bird Areas (IBAs) by BirdLife International and the Royal Society for the Protection of Birds (RSPB). In this paper we present results from the second of these applications. We also present further details on the potential use of the land-cover change assessment tool on sites of recognised conservation importance, in combination with NDVI and other time-series data from the eStation (a system for receiving, processing and disseminating environmental data). We show how the tool can be used to increase the usability of earth observation data by local stakeholders and experts, and assist in evaluating the impact of protection regimes on land-cover change.
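
Because the tool builds on Web Map Services, imagery can be pulled from a GeoServer instance with a standard OGC WMS GetMap request. The sketch below is a generic illustration of such a request, not code from the tool itself; the endpoint URL, layer name and bounding box are placeholder assumptions, while the request parameters follow the WMS 1.1.1 specification.

```python
import requests

# Hypothetical GeoServer endpoint and layer; parameters follow OGC WMS 1.1.1.
wms_url = "https://example.org/geoserver/wms"
params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "landcover:iba_assessment",   # placeholder layer name
    "bbox": "35.0,-1.5,35.5,-1.0",          # lon/lat bounding box (EPSG:4326)
    "srs": "EPSG:4326",
    "width": 512,
    "height": 512,
    "format": "image/png",
}

response = requests.get(wms_url, params=params, timeout=30)
response.raise_for_status()
with open("landcover_tile.png", "wb") as f:
    f.write(response.content)               # rendered map tile for visual assessment
```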

Relevance: 100.00%

Abstract:

A crosswell data set contains a range of angles limited only by the geometry of the source and receiver configuration, the separation of the boreholes and the depth to the target. However, the wide-angle reflections present in crosswell imaging result in amplitude-versus-angle (AVA) features not usually observed in surface data. These features include reflections from angles that are near critical and beyond critical for many of the interfaces; some of these reflections are visible only for a small range of angles, presumably near their critical angle. High-resolution crosswell seismic surveys were conducted over a Silurian (Niagaran) reef at two fields in northern Michigan, Springdale and Coldspring. The Springdale wells extended to much greater depths than the reef, and imaging was conducted from above and from beneath the reef. Combining the results from images obtained from above with those from beneath provides additional information, first by exhibiting ranges of angles that are different for the two images, especially for reflectors at shallow depths, and second by providing additional constraints on the solutions of the Zoeppritz equations. Inversion of seismic data for impedance has become a standard part of the workflow for quantitative reservoir characterization. Inversion of crosswell data using either deterministic or geostatistical methods can lead to poor results because of the phase change beyond the critical angle; the simultaneous pre-stack inversion of partial angle stacks may therefore be best conducted with angles restricted to less than critical. Deterministic inversion is designed to yield only a single, best-fit model of elastic properties, while geostatistical inversion produces multiple models (realizations) of elastic properties, lithology and reservoir properties. Geostatistical inversion produces results with far more detail than deterministic inversion. The difference in detail between the two types of inversion becomes increasingly pronounced for thinner reservoirs, particularly those beyond the vertical resolution of the seismic data. For any interface imaged from above and from beneath, the resulting AVA characters must arise from identical contrasts in elastic properties in the two sets of images, albeit in reverse order. An inversion approach that handles both data sets simultaneously, at pre-critical angles, is demonstrated in this work. The main exploration problem for carbonate reefs is determining the porosity distribution. Images of elastic properties, obtained from deterministic and geostatistical simultaneous inversion of a high-resolution crosswell seismic survey, were used to obtain the internal structure and reservoir properties (porosity) of a Niagaran reef in Michigan. The images obtained are the best of any Niagaran pinnacle reef to date.
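
The pre-critical AVA behaviour that such inversions rely on can be illustrated with Shuey's (1985) three-term approximation to the Zoeppritz PP reflection coefficient. The sketch below is a generic illustration only: it is valid for small contrasts and pre-critical angles (consistent with the angle restriction discussed above), and the interface properties used are illustrative values, not measured Niagaran reef parameters.

```python
import numpy as np

def shuey_reflectivity(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Three-term Shuey (1985) approximation to the Zoeppritz PP reflection
    coefficient; valid for small contrasts and pre-critical angles only."""
    theta = np.radians(theta_deg)
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1

    r0 = 0.5 * (dvp / vp + drho / rho)                                    # intercept
    g = 0.5 * dvp / vp - 2 * (vs / vp) ** 2 * (drho / rho + 2 * dvs / vs)  # gradient
    f = 0.5 * dvp / vp                                                     # curvature

    return r0 + g * np.sin(theta) ** 2 + f * (np.tan(theta) ** 2 - np.sin(theta) ** 2)

# Illustrative tight-over-porous carbonate contrast (placeholder values).
angles = np.arange(0, 41, 5)                   # restricted to pre-critical angles
rc = shuey_reflectivity(5800, 3000, 2.70, 5200, 2700, 2.55, angles)
```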

Relevance: 100.00%

Abstract:

How can we calculate earthquake magnitudes when the signal is clipped and over-run? When a volcano is very active, the seismic record may saturate (i.e., the full amplitude of the signal is not recorded) or be over-run (i.e., the end of one event is covered by the start of a new event). The duration, and sometimes the amplitude, of an earthquake signal is necessary for determining event magnitudes; thus, it may be impossible to calculate earthquake magnitudes when a volcano is very active. This problem is most likely to occur at volcanoes with limited networks of short-period seismometers. This study outlines two methods for calculating earthquake magnitudes when events are clipped and over-run. The first method entails modeling the shape of earthquake codas as a power-law function and extrapolating the duration from the decay of the function. The second method draws relations between the clipped duration (i.e., the length of time a signal is clipped) and the full duration. These methods allow magnitudes to be determined within 0.2 to 0.4 magnitude units. This error is within the range of analyst hand-picks and within the acceptable limits of uncertainty when quickly quantifying volcanic energy release during volcanic crises. Most importantly, these estimates can be made when data are clipped or over-run. These methods were developed with data from the initial stages of the 2004-2008 eruption at Mount St. Helens. Mount St. Helens is a well-studied volcano with many instruments placed at varying distances from the vent. This makes the 2004-2008 eruption a good setting in which to calibrate and refine methodologies that can then be applied to volcanoes with limited networks.
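
The first method, fitting a power-law decay to the visible part of the coda and extrapolating to the noise floor, can be sketched as follows. This is a minimal illustration of the general idea, not the study's calibrated procedure; the synthetic envelope, decay exponent and noise threshold are all placeholder values.

```python
import numpy as np
from scipy.optimize import curve_fit

def coda_envelope(t, a0, p):
    """Power-law model of the coda amplitude decay: A(t) = a0 * t**(-p)."""
    return a0 * t ** (-p)

# Synthetic, un-clipped portion of a coda envelope (illustrative values only).
t_obs = np.linspace(5.0, 20.0, 60)                 # seconds after the onset
a_obs = 400.0 * t_obs ** (-1.3) * (1 + 0.05 * np.random.randn(t_obs.size))

# Fit the visible portion of the coda...
(a0, p), _ = curve_fit(coda_envelope, t_obs, a_obs, p0=(300.0, 1.0))

# ...then extrapolate to the time at which the envelope reaches the noise floor,
# i.e. solve a0 * t**(-p) = noise_level for t.
noise_level = 2.0                                  # placeholder noise threshold
duration = (a0 / noise_level) ** (1.0 / p)         # extrapolated full duration (s)
```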

Relevance: 100.00%

Abstract:

Open-source software systems have become a viable alternative to proprietary systems. We collected data on the usage of an open-source workflow management system developed by a university research group, and examined this data with a focus on how three different user cohorts – students, academics and industry professionals – develop behavioral intentions to use the system. Building upon a framework of motivational components, we examined the group differences in extrinsic versus intrinsic motivations on continued usage intentions. Our study provides a detailed understanding of the use of open-source workflow management systems in different user communities. Moreover, it discusses implications for the provision of workflow management systems, the user-specific management of open-source systems and the development of services in the wider user community.

Relevance: 100.00%

Abstract:

This paper describes modelling, estimation and control of the horizontal translational motion of an open-source and cost-effective quadcopter — the MikroKopter. We determine the dynamics of its roll and pitch attitude controller, system latencies, and the units associated with the values exchanged with the vehicle over its serial port. Using this information, we create a horizontal-plane velocity estimator that uses data from the built-in inertial sensors and an onboard laser scanner, and implement translational control using a nested control loop architecture. We present experimental results for the model and estimator, as well as closed-loop positioning.
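
A nested control loop of the kind mentioned here cascades an outer position loop (producing a velocity command) with an inner velocity loop (producing an attitude command). The sketch below is a generic, single-axis illustration of that structure on a toy point-mass model; the gains, rates and small-angle dynamics are assumptions, not the controller implemented on the MikroKopter.

```python
# Minimal sketch of a nested position/velocity control loop (one horizontal axis).
dt = 0.02                 # assumed 50 Hz control rate
kp_pos = 1.0              # outer loop: position error -> velocity command
kp_vel, ki_vel = 0.8, 0.2 # inner loop: velocity error -> pitch command

x, v = 0.0, 0.0           # toy point-mass state for illustration
x_ref = 1.0               # desired position (m)
vel_integral = 0.0

for _ in range(500):
    v_cmd = kp_pos * (x_ref - x)                            # outer position loop
    vel_err = v_cmd - v
    vel_integral += vel_err * dt
    pitch_cmd = kp_vel * vel_err + ki_vel * vel_integral    # inner velocity loop

    accel = 9.81 * pitch_cmd                                # small-angle thrust-tilt model
    v += accel * dt                                         # integrate the toy dynamics
    x += v * dt
```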

Relevance: 100.00%

Abstract:

This paper describes system identification, estimation and control of translational motion and heading angle for a cost-effective open-source quadcopter — the MikroKopter. The dynamics of its built-in sensors, roll and pitch attitude controller, and system latencies are determined and used to design a computationally inexpensive multi-rate velocity estimator that fuses data from the built-in inertial sensors and a low-rate onboard laser range finder. Control is performed using a nested loop structure that is also computationally inexpensive and incorporates different sensors. Experimental results for the estimator and closed-loop positioning are presented and compared with ground truth from a motion capture system.
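
The multi-rate fusion idea can be illustrated with a standard Kalman filter that predicts at a high inertial rate and corrects whenever a low-rate range measurement arrives. The sketch below is a generic 1-D constant-velocity filter, not the estimator from the paper; the 100 Hz/10 Hz rates and all noise values are assumptions.

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state = [position, velocity].
dt = 0.01                                     # assumed 100 Hz prediction rate
F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition
B = np.array([[0.5 * dt ** 2], [dt]])         # acceleration input
Q = np.diag([1e-4, 1e-3])                     # process noise (placeholder)
H = np.array([[1.0, 0.0]])                    # laser measures position only
R = np.array([[4e-4]])                        # measurement noise (placeholder)

x = np.zeros((2, 1))
P = np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel                     # propagate with the IMU acceleration
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(1000):
    x, P = predict(x, P, accel=0.0)           # every high-rate IMU sample
    if k % 10 == 0:                           # every 10th step: low-rate laser fix
        x, P = update(x, P, np.array([[0.0]]))
```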

Relevance: 100.00%

Abstract:

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System (ROS) framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes representing RatSLAM's pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model, along with salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.
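
The node/topic pattern that ROS bindings provide can be illustrated with a minimal subscriber node. The sketch below is generic: the topic name is a placeholder and is not necessarily one published by OpenRatSLAM; it only shows how a node taps into a message stream such as odometry.

```python
#!/usr/bin/env python
# Minimal ROS (rospy) node sketch: subscribe to an odometry topic and log speed.
# "/odom" is a placeholder topic name, not necessarily an OpenRatSLAM topic.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    v = msg.twist.twist.linear.x
    rospy.loginfo("forward speed: %.3f m/s", v)

if __name__ == "__main__":
    rospy.init_node("odom_listener")
    rospy.Subscriber("/odom", Odometry, odom_callback)
    rospy.spin()
```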

Relevance: 100.00%

Abstract:

Relevation! is a system for performing relevance judgements for information retrieval evaluation. Relevation! is web-based, fully configurable and expandable; it allows researchers to effectively collect assessments and additional qualitative data. The system is easily deployed allowing assessors to smoothly perform their relevance judging tasks, even remotely. Relevation! is available as an open source project at: http://ielab.github.io/relevation.

Relevance: 100.00%

Abstract:

Background: Sexually-transmitted pathogens often have severe reproductive health implications if treatment is delayed or absent, especially in females. The complex processes of disease progression, namely replication and ascension of the infection through the genital tract, span both extracellular and intracellular physiological scales, and in females can vary over the distinct phases of the menstrual cycle. The complexity of these processes, coupled with the common impossibility of obtaining comprehensive and sequential clinical data from individual human patients, makes mathematical and computational modelling valuable tools in developing our understanding of the infection, with a view to identifying new interventions. While many within-host models of sexually-transmitted infections (STIs) are available in the existing literature, these models are difficult to deploy in clinical or experimental settings since simulations often require complex computational approaches.

Results: We present STI-GMaS (Sexually-Transmitted Infections – Graphical Modelling and Simulation), an environment for simulation of STI models, with a view to stimulating the uptake of these models within the laboratory or clinic. The software currently focuses upon the representative case study of Chlamydia trachomatis, the most common sexually-transmitted bacterial pathogen of humans. Here, we demonstrate the use of a hybrid PDE–cellular automata model for simulation of a hypothetical Chlamydia vaccination, showing the effect of a vaccine-induced antibody in preventing the infection from ascending above the cervix. This example illustrates the ease with which existing models can be adapted to describe new studies, and its careful parameterisation within STI-GMaS facilitates future tuning to experimental data as they arise.

Conclusions: STI-GMaS represents the first software designed explicitly for in-silico simulation of STI models by non-theoreticians, thus presenting a novel route to bridging the gap between computational and clinical/experimental disciplines. With the propensity for model reuse and extension, there is much scope within STI-GMaS to allow clinical and experimental studies to inform model inputs and drive future model development. Many of the modelling paradigms and software design principles deployed to date transfer readily to other STIs, both bacterial and viral; forthcoming releases of STI-GMaS will extend the software to incorporate a more diverse range of infections.
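
The continuum half of a hybrid PDE-cellular automata model can be pictured as a one-dimensional diffusion equation with a clearance term along the genital tract. The sketch below is a generic explicit finite-difference solver for such an equation; it is not the STI-GMaS model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Generic 1-D diffusion-with-clearance PDE, dC/dt = D * d2C/dx2 - k * C,
# solved by explicit finite differences.  Loosely illustrates the PDE component
# of a hybrid PDE-CA model of an ascending infection; values are placeholders.
L, nx = 10.0, 101                    # domain length (cm) and grid points
dx = L / (nx - 1)
D = 0.01                             # diffusion coefficient (cm^2/h), assumed
k = 0.2                              # clearance rate representing antibody action (1/h), assumed
dt = 0.4 * dx ** 2 / D               # time step within the explicit stability limit

C = np.zeros(nx)
C[0:5] = 1.0                         # initial pathogen load near the lower tract

for _ in range(2000):
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx ** 2
    C = C + dt * (D * lap - k * C)   # diffusion plus clearance
    C[0], C[-1] = C[1], C[-2]        # zero-flux boundaries
```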

Relevance: 100.00%

Abstract:

On 19 June 2015, representatives from over 40 Australian research institutions gathered in Canberra to launch their Open Data Collections. The one-day event, hosted by the Australian National Data Service (ANDS), showcased to government and a range of national stakeholders the rich variety of data collections that have been generated through the Major Open Data Collections (MODC) project. Colin Eustace attended the showcase for QUT Library and presented a poster reflecting the work that he and Jodie Vaughan carried out through the project. QUT's Blueprint 4, the University's five-year institutional strategic plan, outlines the key priorities of developing a commitment to working in partnership with industry and combining disciplinary strengths with interdisciplinary application. The Division of Technology, Information and Learning Support (TILS) has undertaken a number of ANDS-funded projects since 2009 with the aim of developing improved research data management services within the University to support these strategic aims. By leveraging existing tools and systems developed during these projects, the MODC project delivered support to multi-disciplinary collaborative research activities through partnership building between QUT researchers and Queensland government agencies, in order to add to and promote the discovery and reuse of a collection of spatially referenced datasets. The MODC project built upon existing Research Data Finder infrastructure (which uses VIVO open-source software, developed by Cornell University) to develop a separate collection, Spatial Data Finder (https://researchdatafinder.qut.edu.au/spatial), as the interface to display the spatial data collection. During the course of the project, 62 dataset descriptions were added to Spatial Data Finder, seven were added to Research Data Finder and two were added to Software Finder, another separate collection. The project team met with 116 individual researchers and attended 13 school and faculty meetings to promote the MODC project and raise awareness of the Library's services and resources for research data management.