941 results for open source seismic data processing packages
Abstract:
Data processing services for the Meteosat geostationary satellite are presented. The implemented services correspond to the different levels of remote-sensing data processing, including noise reduction at the preprocessing level, cloud mask extraction at the low level, and fractal dimension estimation at the high level. The cloud mask is obtained as a result of Markovian segmentation of infrared data. To overcome the high computational complexity of Markovian segmentation, a parallel algorithm is developed. The fractal dimension of Meteosat data is estimated using fractional Brownian motion models.
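A minimal illustration of the high-level step described above: under a fractional Brownian motion (fBm) surface model, the mean squared increment of a field scales as E[|Z(x+h) - Z(x)|^2] ~ h^(2H), and the fractal dimension of the surface is D = 3 - H. The NumPy sketch below (a generic illustration, not the paper's parallel implementation; the function name and lag range are assumptions) estimates D for a 2-D tile by fitting that scaling law in log-log space.

```python
import numpy as np

def fractal_dimension_fbm(image, max_lag=16):
    """Estimate the fractal dimension of a 2-D field under an fBm surface model.

    For fBm the mean squared increment scales as E[|Z(x+h)-Z(x)|^2] ~ h^(2H),
    and the surface fractal dimension is D = 3 - H.
    """
    lags = np.arange(1, max_lag + 1)
    msd = []
    for h in lags:
        dx = image[:, h:] - image[:, :-h]   # horizontal increments at lag h
        dy = image[h:, :] - image[:-h, :]   # vertical increments at lag h
        msd.append(np.mean(np.concatenate([dx.ravel()**2, dy.ravel()**2])))
    # slope of log(msd) versus log(lag) equals 2H
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    H = slope / 2.0
    return 3.0 - H

# Synthetic smooth-noise tile standing in for an infrared channel tile
rng = np.random.default_rng(0)
tile = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)
print(fractal_dimension_fbm(tile))
```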
Abstract:
Open source software (OSS) popularity is growing steadily, and many OSS systems could be used to preserve cultural heritage objects. Such solutions give organizations the opportunity to afford the development of a digital collection. This paper focuses on reviewing two OSS tools, CollectionSpace and the Open Video Digital Library Toolkit, and discusses how they could be used for organizing digital replicas of cultural objects. The features of the software are presented and some examples are given.
Abstract:
As massive data sets become increasingly available, people are facing the problem of how to effectively process and understand these data. Traditional sequential computing models are giving way to parallel and distributed computing models, such as MapReduce, due both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, tries to develop effective techniques and applications using MapReduce that can help people solve large-scale problems. Three different problems are tackled in the dissertation. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques that can be used to handle data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up to factorize matrices with dimensions on the order of millions in MapReduce, based on different matrix multiplication implementations. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
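As a single-machine sketch of the kind of computation such a system distributes: one multiplicative-update step of nonnegative matrix factorization (Lee–Seung) reduces to a handful of matrix products (W^T A, W^T W H, A H^T, W H H^T), and it is products of exactly this form that a MapReduce implementation would evaluate as block-wise distributed multiplies. The NumPy code below is illustrative only and is not the dissertation's implementation.

```python
import numpy as np

def nmf_multiplicative_step(A, W, H, eps=1e-9):
    """One multiplicative-update step of NMF: minimize ||A - W H||_F^2 with W, H >= 0.

    In a MapReduce setting each product below (W^T A, W^T W H, A H^T, W H H^T)
    would be computed as a distributed matrix multiply over blocks of A;
    here they are plain NumPy calls.
    """
    H *= (W.T @ A) / (W.T @ W @ H + eps)
    W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(0)
A = rng.random((100, 80))   # nonnegative data matrix
W = rng.random((100, 5))    # rank-5 factors
H = rng.random((5, 80))
for _ in range(50):
    W, H = nmf_multiplicative_step(A, W, H)
print(np.linalg.norm(A - W @ H))   # reconstruction error decreases over iterations
```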
Abstract:
Florida International University has undergone a reform in the introductory physics classes by focusing on the laboratory component of these classes. We present results from the secondary implementation of two research-based instructional strategies: the implementation of the Learning Assistant model as developed by the University of Colorado at Boulder and the Open Source Tutorial curriculum developed at the University of Maryland, College Park. We examine the results of the Force Concept Inventory (FCI) for introductory students over five years (n=872) and find that the mean raw gain of students in transformed lab sections was 0.243, while the mean raw gain of the traditional labs was 0.159, with a Cohen’s d effect size of 0.59. Average raw gains on the FCI were 0.243 for Hispanic students and 0.213 for women in the transformed labs, indicating that these reforms are not widening the gaps between underrepresented student groups and majority groups. Our results illustrate how research-based instructional strategies can be successfully implemented in a physics department with minimal department engagement and in a sustainable manner.
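For reference, the reported quantities can be reproduced as follows: the raw gain is the difference between mean post-test and mean pre-test fraction correct, and Cohen's d divides the difference of group means by a pooled standard deviation. The sketch below uses synthetic, illustrative numbers, not the study's data.

```python
import numpy as np

def raw_gain(pre, post):
    """Raw gain: mean post-test fraction correct minus mean pre-test fraction correct."""
    return np.mean(post) - np.mean(pre)

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Illustrative per-student gains (not the study's data)
rng = np.random.default_rng(1)
transformed = rng.normal(0.243, 0.15, 300)
traditional = rng.normal(0.159, 0.14, 300)
print(raw_gain([0.30] * 10, [0.54] * 10))   # 0.24
print(cohens_d(transformed, traditional))   # effect size of the group difference
```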
Abstract:
This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early stage cancer nodules. The open-source platform 3D Slicer was used for creating freely available algorithms for virtual bronchoscopy. Firstly, the development of an open-source semi-automatic algorithm for prediction of solitary pulmonary nodule malignancy is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness) which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared to independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate the performance of the algorithm. Different CT scan parameters are investigated and parameters for successful airway segmentation are optimized. Slice thickness is the parameter with the greatest effect, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope for the development of a GPS-like system for navigating within the lungs. Tools for pre-procedural planning and for helping with navigation are provided. Registration between the lungs of the patient and the virtually reconstructed airway tree is achieved using a landmark-based approach. In an attempt to reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a target for biopsy on the patient CT image, and pathways from the trachea towards the selected targets are automatically created. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5 DOF sensing that does not provide roll rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in the pre-clinical environment of a breathing lung model. Animal studies were performed to evaluate the overall system performance.
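The landmark-based registration mentioned above is commonly solved as a least-squares rigid alignment of corresponding point sets (the Kabsch/Procrustes method). The sketch below is a generic illustration of that technique, assuming paired landmarks in patient and model coordinates; it is not the thesis's 3D Slicer implementation, and the function name is hypothetical.

```python
import numpy as np

def rigid_landmark_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst landmarks.

    src, dst: (N, 3) arrays of corresponding landmark coordinates.
    Returns R (3x3) and t (3,) such that dst ~= src @ R.T + t (Kabsch method).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Example: recover a known rotation and translation from paired landmarks
rng = np.random.default_rng(0)
src = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_landmark_registration(src, dst)
print(np.allclose(src @ R.T + t, dst, atol=1e-8))
```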
Abstract:
The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. This specifically concerns data processing with semantic harmonisation, low-level fusion, analytics, knowledge modelling with high-level fusion, and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations and of the ongoing large-scale eVacuate project on critical crowd behaviour detection in confined spaces.
Abstract:
The Data Processing Department of the ISHC has developed coding forms for the data to be entered into the program. The Highway Planning and Programming and the Design Departments are responsible for coding and submitting the necessary data forms to Data Processing for the noise prediction on the highway sections.
Abstract:
Seismic data processing converts field records into seismic sections with geological meaning, which reveal information and help delineate the geological layers of the subsurface and identify buried structures. Therefore, the interpretation of seismic data is only as good as the processing. This work is the result of a curricular internship at the geophysical prospecting company GeoSurveys, which consisted mainly of processing 18 lines of high-resolution multichannel seismic reflection data acquired at Pulau Tekong island in Singapore, intended for a soil investigation of the bay of that island. These data were provided to GeoSurveys for academic purposes, which include this dissertation. To achieve the proposed objectives, which consisted of evaluating the impact of operating conditions on seismic signal quality and interpreting the lines, the lines were processed using the company's standard processing flow in the Radex Pro software. The main added value of this processing flow is the static-correction method, UHRS trim statics, in addition to the usual techniques used to improve the resolution of seismic sections, such as deconvolution, noise attenuation through stacking, NMO corrections, and migration, among other techniques. The interpretation of the processed seismic lines was carried out in the Kingdom Suite (IHS) software, by distinguishing the internal configuration of the reflectors in each seismic section, thereby establishing the main seismo-stratigraphic units and identifying the interface zones that delimit the main horizons. A summary geological study of the survey area and of the geodynamic evolution of the region was also carried out.
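Among the steps listed above, the NMO correction has a simple closed form: a reflection with zero-offset time t0 arrives at offset x at t(x) = sqrt(t0^2 + x^2/v^2), and correcting a CMP gather amounts to mapping each trace back along that hyperbola. The sketch below is a generic constant-velocity illustration (function name and parameters are assumptions), not the Radex Pro flow used in the internship.

```python
import numpy as np

def nmo_correct(gather, offsets, velocity, dt):
    """Apply normal-moveout correction to a CMP gather.

    gather   : (n_samples, n_traces) array of amplitudes
    offsets  : (n_traces,) source-receiver offsets in metres
    velocity : constant NMO velocity in m/s (for simplicity)
    dt       : sample interval in seconds

    Uses the hyperbolic moveout t(x) = sqrt(t0^2 + x^2 / v^2) with
    linear interpolation along each trace.
    """
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt                    # zero-offset times
    corrected = np.zeros_like(gather)
    for j in range(n_traces):
        tx = np.sqrt(t0**2 + (offsets[j] / velocity) ** 2)   # arrival time at offset x
        corrected[:, j] = np.interp(tx, t0, gather[:, j], left=0.0, right=0.0)
    return corrected

# Tiny synthetic example: one flat reflector at t0 = 0.2 s
dt, v = 0.002, 1600.0
offsets = np.array([0.0, 100.0, 200.0, 300.0])
gather = np.zeros((400, offsets.size))
for j, x in enumerate(offsets):
    gather[int(round(np.sqrt(0.2**2 + (x / v) ** 2) / dt)), j] = 1.0
flat = nmo_correct(gather, offsets, v, dt)
print(flat.argmax(axis=0))   # spikes align near sample 100 after correction
```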
Abstract:
New morpho-bathymetric and tectono-stratigraphic data on the Naples and Salerno Gulfs, derived from bathymetric and seismic data analysis and integrated geologic interpretation, are presented here. The CUBE (Combined Uncertainty Bathymetric Estimator) method has been applied to complex morphologies, such as the Capri continental slope and the related geological structures occurring in the Salerno Gulf. The bathymetric data analysis has been carried out for marine geological maps of the whole Campania continental margin at scales ranging from 1:25.000 to 1:10.000, including focused examples in the Naples and Salerno Gulfs, Naples harbour, the Capri and Ischia Islands and the Salerno Valley. Seismic data analysis has allowed for the correlation of the main morpho-structural lineaments recognized at a regional scale through multichannel profiles with morphological features cropping out at the sea bottom, evident from the bathymetry. The main fault systems in the area have been represented on a tectonic sketch map, including the master fault located north of the Salerno Valley half graben. Some normal faults parallel to the master fault have been interpreted from the slope map derived from the bathymetric data. A complex system of antithetic faults bounds two morpho-structural highs located 20 km to the south of Capri Island. Some hints of compressional reactivation of normal faults in an extensional setting involving the whole Campania continental margin have been shown by the seismic interpretation.
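The slope map mentioned above can be derived directly from a regularly gridded bathymetric surface as the arctangent of the gradient magnitude. The short sketch below, assuming a regular grid and a hypothetical 25 m cell size, illustrates the computation; it is not the map-production workflow used for the Campania margin.

```python
import numpy as np

def slope_map(depth, cell_size):
    """Slope map (degrees) from a regularly gridded bathymetric surface.

    depth     : (ny, nx) array of depths/elevations in metres
    cell_size : grid spacing in metres
    """
    dz_dy, dz_dx = np.gradient(depth, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Synthetic example: a planar surface dipping 10 degrees toward +x
cell = 25.0                                   # assumed 25 m grid spacing
x = np.arange(0, 200) * cell
depth = -np.tan(np.radians(10.0)) * x[None, :] * np.ones((100, 1))
print(slope_map(depth, cell).mean())          # ~10 degrees
```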
Abstract:
The structure of the Moroccan and Nova Scotia conjugate rifted margins is of key importance for understanding the Mesozoic break-up and evolution of the northern central Atlantic Ocean basin. Seven combined multichannel reflection (MCS) and wide-angle seismic (OBS) data profiles were acquired along the Atlantic Moroccan margin between the latitudes of 31.5° and 33° N during the MIRROR seismic survey in 2011, in order to image the transition from continental to oceanic crust, to study the variation in crustal structure and to characterize the crust under the West African Coast Magnetic Anomaly (WACMA). The data were modeled using a forward modeling approach. The final models image crustal thinning from 36 km thickness below the continent to approximately 8 km in the oceanic domain. A 100 km wide zone characterized by rough basement topography and high seismic velocities of up to 7.4 km/s in the lower crust is observed westward of the West African Coast Magnetic Anomaly. No basin underlain by continental crust, such as the one identified north of our study area, has been imaged in this region. Comparison to the conjugate Nova Scotian margin shows a similar continental crustal thickness and layer geometry, and the existence of exhumed and serpentinized upper mantle material on the Canadian side only. The oceanic crustal thickness is lower on the Canadian margin.
Abstract:
This contribution concerns the project E-Learning on the ILIAS Platform at the Universität der Bundeswehr Hamburg (E-L I-P UniBwH). The aim of the project is to support face-to-face teaching through the use of electronic media. The contribution presents starting points that motivate teachers to use the e-learning platform. It identifies needs for which an e-learning platform can be a possible solution, and names the framework conditions under which it is deployed. The goal is to promote the sustainability of e-learning at the UniBwH. (DIPF/Orig.)
Abstract:
As an emerging innovation paradigm gaining momentum in recent years, the open innovation paradigm is calling for greater theoretical depth and more empirical research. This dissertation proposes that open innovation in the context of open source software sponsorship may be viewed as knowledge strategies of the firm. Hence, this dissertation examines the performance determinants of open innovation through the lens of knowledge-based perspectives. Using event study and regression methodologies, this dissertation found that these open source software sponsorship events can indeed boost the stock market performance of US public firms. In addition, both the knowledge capabilities of the firms and the knowledge profiles of the open source projects they sponsor matter for performance. In terms of firm knowledge capabilities, internet service firms perform better than other firms owing to their advantageous complementary capabilities. Also, strong knowledge exploitation capabilities of the firm are positively associated with performance. In terms of the knowledge profile of sponsored projects, platform projects perform better than component projects. Also, community-originated projects outperform firm-originated projects. Finally, based on these findings, this dissertation discussed the important theoretical implications for the strategic tradeoff between knowledge protection and sharing.
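As a sketch of the event-study methodology the abstract refers to: abnormal returns are the stock's returns minus those predicted by a market model fitted over an estimation window, and the cumulative abnormal return (CAR) sums them over the event window. The code below uses synthetic returns and illustrative window choices, not the dissertation's data or exact specification.

```python
import numpy as np

def car_market_model(stock, market, est_window, event_window):
    """Cumulative abnormal return (CAR) under a market model.

    stock, market : aligned arrays of daily returns
    est_window    : slice used to fit r_stock = alpha + beta * r_market
    event_window  : slice over which abnormal returns are summed
    """
    beta, alpha = np.polyfit(market[est_window], stock[est_window], 1)
    abnormal = stock[event_window] - (alpha + beta * market[event_window])
    return abnormal.sum()

# Synthetic illustration (not the dissertation's data)
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 250)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.01, 250)
stock[240:243] += 0.01                       # bump around the "announcement"
print(car_market_model(stock, market, slice(0, 230), slice(239, 244)))
```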
Abstract:
Presentation from the MARAC conference in Boston, MA on March 18-21, 2015. S. 24 - DIY Archives: Enhancing Access to Collections via Free, Open-Source Platforms