885 results for open source seismic data processing packages
Abstract:
This thesis presents two different forms of the Born approximation for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small-amplitude heterogeneities superimposed on a slowly varying background. The first method is related to frequency-wavenumber migration methods. Using data generated with an exact theory for flat interfaces, it is shown to properly recover two independent acoustic parameters, within the bandpass of the experiment's source time function, for contrasts of about 5 percent. The independent determination of the two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.
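For orientation, the single-scattering relation underlying both methods is the standard first-order acoustic Born approximation; the form below is the textbook version in our notation (background field u_0, background Green's function G_0, perturbations in bulk modulus and density), not a formula quoted from the thesis:

```latex
u_s(\mathbf{r}_g,\omega) \;\approx\;
\omega^{2}\!\int_V G_0(\mathbf{r}_g,\mathbf{r})\,
\delta\!\left(\tfrac{1}{\kappa}\right)\!(\mathbf{r})\,u_0(\mathbf{r},\omega)\,dV
\;+\;
\int_V \nabla G_0(\mathbf{r}_g,\mathbf{r})\cdot
\delta\!\left(\tfrac{1}{\rho}\right)\!(\mathbf{r})\,\nabla u_0(\mathbf{r},\omega)\,dV
```

The two perturbations, in bulk modulus and in density, are the two independent acoustic parameters the method recovers, and their different angular radiation patterns are what makes the recovery depend on angle coverage.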
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained by applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest-descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several reconstructions for simple point-scatterer models are performed. Recovery of the amplitudes of the anomalies improves with successive iterations; iterations also improve the sharpness of the images.
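As a concrete illustration of the optimizer named above, here is a minimal NumPy sketch of the Davidon-Fletcher-Powell (DFP) quasi-Newton update; the small quadratic misfit standing in for the residual between recorded and modelled wavefields is invented for the example, not the thesis's forward problem:

```python
import numpy as np

def dfp_minimize(f, grad, x0, iters=50, tol=1e-8):
    """Davidon-Fletcher-Powell quasi-Newton minimization (textbook form)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)              # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        p = -H @ g                  # quasi-Newton search direction
        t = 1.0                     # crude backtracking line search
        while f(x + t * p) > f(x) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        y = g_new - g
        if abs(s @ y) < 1e-14:      # guard against a degenerate update
            return x_new
        # DFP rank-two update: H <- H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y)
        Hy = H @ y
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Example: minimize a small quadratic misfit ||A x - b||^2 / 2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
print(dfp_minimize(f, grad, np.zeros(2)))   # -> approx. [0.2, 0.4]
```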
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to the P-P, P-S, S-P and S-S pairs of incident and scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers; the scattered rays then propagate in the background medium without further interaction. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
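Schematically, the far-field elastic Born field described here is a sum over the four incident-scattered mode pairs; the notation below (scattering amplitudes A, ray traveltimes tau) is ours, intended only to convey the structure of the sum:

```latex
u_s(\mathbf{r}_g,\omega)\;\approx
\sum_{(a,b)\,\in\,\{P,S\}^2}
A_{ab}(\mathbf{r}_s,\mathbf{r},\mathbf{r}_g)\,
e^{\,i\omega\left[\tau_a(\mathbf{r}_s,\mathbf{r})\,+\,\tau_b(\mathbf{r},\mathbf{r}_g)\right]}
```

where \tau_a is the traveltime of the incident mode a from the source to the scatterer and \tau_b that of the scattered mode b from the scatterer to the receiver.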
Abstract:
BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as a textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.
Abstract:
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science"; "Standards and Interoperability"; "Open Science and Reproducibility"; "Translational Bioinformatics"; "Visualization"; and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Abstract:
Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring and atmospheric studies. Single flight lines of airborne hyperspectral data are often in the region of tens of gigabytes in size, which means that a single aircraft can collect terabytes of remotely sensed hyperspectral data during a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude, and then geocorrected. To enable efficient processing of these large datasets, the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject and resample airborne data. Each stage of the toolbox outputs data in the common Band Interleaved by Line (BIL) format, which allows integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable for use on a workstation PC as well as for the facility's automated processing; to this end APL can be used under both Windows and Linux, on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms and approach. We present example results from using APL with an AISA Eagle sensor and assess its spatial accuracy using data from multiple flight lines collected during a campaign in 2008, together with in situ surveyed ground control points.
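To illustrate the interleaving scheme named above, here is a minimal NumPy sketch for reading a raw BIL cube; the file name, dimensions and data type are hypothetical and would normally be taken from the accompanying header file rather than hard-coded:

```python
import numpy as np

# Hypothetical values -- in practice read from the image header file.
lines, bands, samples = 2000, 244, 1024
dtype = np.uint16

# BIL (Band Interleaved by Line): for each scan line, one row of
# samples is stored for every band before the next line begins,
# so the cube unpacks as (lines, bands, samples).
raw = np.fromfile("flightline.bil", dtype=dtype)
cube = raw.reshape(lines, bands, samples)

band_img = cube[:, 42, :]      # band 42 as a (lines, samples) image
spectrum = cube[100, :, 512]   # full spectrum at line 100, sample 512
```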
Abstract:
This paper is on the implementation of a dual-axis positioning system controller. The system was designed to be used for space-dependent ultrasound signal acquisition problems, such as pressure field mapping. The work developed can be grouped into two main subjects: hardware and software. Each axis includes one stepper motor connected to a driver circuit, which is then connected to a processing unit. The graphical user interface is simple and clear for the user. The system resolution was computed as 127 µm, with an accuracy of 2.44 µm. Although the target application is ultrasound signal acquisition, the controller can be applied to other devices that have up to four stepper motors. The application was developed as open source software, so it can be used or changed to fit different purposes.
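As a sketch of the positioning arithmetic implied above, converting a requested displacement into whole motor steps at the quoted 127 µm resolution looks roughly like this; the resolution constant comes from the abstract, while the function and example values are ours:

```python
STEP_UM = 127.0  # axis resolution quoted in the abstract, in micrometres

def displacement_to_steps(dx_um: float) -> int:
    """Convert a requested displacement in micrometres into whole motor steps."""
    return round(dx_um / STEP_UM)

# Example: move 10 mm along one axis.
steps = displacement_to_steps(10_000)    # -> 79 steps
residual = 10_000 - steps * STEP_UM      # quantization error, under half a step
print(steps, residual)
```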
Abstract:
The importance of data warehousing and business intelligence systems is increasingly pronounced, as they equip organizations with the ability to store, explore and produce value-added information for their decision-making processes. This reality clearly applies to the Portuguese public administration and, in particular, to the bodies with central responsibilities in the Ministry of Health. In the case of the Shared Services of the Ministry of Health (SPMS), whose mission includes providing the SNS with central business intelligence systems, the demand from its clients for analytical capabilities in their central systems has been felt very strongly. However, it is clear that both the cost and the complexity of most of these projects have posed a serious threat to their adoption and success. On one hand, the public administration has been strongly encouraged to integrate and adopt open source solutions (a free licensing model) for its information systems projects. On the other hand, we have been witnessing a wave of general acceptance of new software project development methodologies, notably Agile methodologies, which present themselves as more flexible, less formal and with higher success rates. In order to assess the applicability of open source and Agile methodologies to business intelligence systems, this work documents the implementation of an organizational project for SPMS, using free, open source tools and an Agile development methodology.
Abstract:
Software systems are progressively being deployed in many facets of human life, and the failure of such systems can have varied impacts on their users. The fundamental aspect that underpins a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time and is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier work focused on software reliability with no consideration for hardware, or vice versa. However, a complete estimate of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying failure data for hardware and software components and building a model on that basis to predict reliability. To develop such a model, the focus is on systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modeling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software and an evaluation of reliability growth models, and presents an integrated model for predicting the reliability of a computing system. The developed model is compared with existing models and its usefulness is discussed.
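A minimal sketch of the kind of combined hardware-software estimate described above, assuming a series system, an exponential hardware failure model and a Goel-Okumoto software reliability growth model; all parameter values are invented for illustration:

```python
import math

def r_hardware(x, lam=1e-4):
    """Hardware reliability over a mission of x hours (exponential model)."""
    return math.exp(-lam * x)

def r_software(x, t, a=120.0, b=0.005):
    """Goel-Okumoto SRGM: reliability over (t, t + x) after t hours of testing.
    m(u) = a * (1 - exp(-b * u)) is the expected cumulative number of faults."""
    m = lambda u: a * (1.0 - math.exp(-b * u))
    return math.exp(-(m(t + x) - m(t)))

def r_system(x, t):
    """Series combination: hardware and software must both survive the mission."""
    return r_hardware(x) * r_software(x, t)

# Example: a 100-hour mission after 1000 hours of software testing.
print(r_system(100.0, 1000.0))
```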
Abstract:
This article shows how a multimodal trip-planning system can be developed at low cost and risk, based on an open source approach and de facto standards. A fully open source solution was developed for a door-to-door public transport information system built on de facto standards. Route computation is performed with Graphserver, while the cartography is based on OpenStreetMap. It is also demonstrated how to export a real public transport timetable database, such as that of the operator ETM (Empresa de Transporte Metropolitano de València), to the Google Transit specification, enabling route computation both from our prototype and from Google Transit.
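To make the export step concrete: the Google Transit specification referred to above is the GTFS feed format, a ZIP archive of CSV files. A minimal sketch of writing one of them, stop_times.txt, from hypothetical timetable records (the field names follow the GTFS reference; the input rows are invented):

```python
import csv

# Hypothetical rows pulled from an operator's timetable database.
timetable = [
    {"trip": "L1-0700", "stop": "S001", "arr": "07:00:00", "dep": "07:00:30", "seq": 1},
    {"trip": "L1-0700", "stop": "S002", "arr": "07:04:00", "dep": "07:04:30", "seq": 2},
]

# stop_times.txt is one of the required files in a GTFS feed.
with open("stop_times.txt", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["trip_id", "arrival_time", "departure_time",
                     "stop_id", "stop_sequence"])
    for row in timetable:
        writer.writerow([row["trip"], row["arr"], row["dep"],
                         row["stop"], row["seq"]])
```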
Abstract:
Cross-hole anisotropic electrical and seismic tomograms of fractured metamorphic rock have been obtained at a test site where extensive hydrological data were available. A strong correlation between electrical resistivity anisotropy and seismic compressional-wave velocity anisotropy has been observed. Analysis of core samples from the site reveals that the shale-rich rocks have fabric-related average velocity anisotropy of between 10% and 30%. The cross-hole seismic data are consistent with these values, indicating that the observed anisotropy might be principally due to the inherent rock fabric rather than to aligned sets of open fractures. One region with velocity anisotropy greater than 30% has been modelled as aligned open fractures within an anisotropic rock matrix, and this model is consistent with the available fracture density and hydraulic transmissivity data from the boreholes and with the cross-hole resistivity tomography data. In general, however, the study highlights the uncertainties that can arise, due to the relative influence of rock fabric and fluid-filled fractures, when using geophysical techniques for hydrological investigations.
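For orientation, percent velocity anisotropy is conventionally computed from the fastest and slowest propagation directions; the definition below is the common one, not necessarily the exact expression used in this paper:

```latex
A_V \;=\; \frac{V_{\max}-V_{\min}}{\left(V_{\max}+V_{\min}\right)/2}\times 100\%
```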
Abstract:
GPS technology is nowadays embedded in portable, low-cost electronic devices to track the movements of mobile objects. This has greatly impacted the transportation field by creating a novel and rich source of traffic data on the road network. Although GPS devices hold significant promise for overcoming problems such as underreporting, respondent fatigue, inaccuracies and other human errors in data collection, the technology is still relatively new and raises many issues for potential users. These issues tend to revolve around the following areas: reliability, data processing and the related applications. This thesis studies GPS tracking from the methodological, technical and practical aspects. It first evaluates the reliability of GPS-based traffic data using data from an experiment covering three traffic modes (car, bike and bus) traveling along the road network. It then outlines the general procedure for processing GPS tracking data and discusses related issues uncovered using real-world GPS tracking data from 316 cars. Thirdly, it investigates the influence of road network density on finding optimal locations for enhancing travel efficiency and decreasing travel cost. The results show that the geographical positioning is reliable: velocity is slightly underestimated, whereas altitude measurements are unreliable. Post-processing techniques using auxiliary information are found to be necessary and important for resolving the inaccuracy of GPS data. The density of the road network influences the finding of optimal locations; this influence stabilizes at a certain level and does not deteriorate as node density increases.
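A minimal sketch of one post-processing step implied above, deriving speed from consecutive GPS fixes with the haversine formula; the fixes are invented, and a real pipeline would add outlier filtering and map-matching against the road network:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Two invented consecutive fixes: (time in seconds, latitude, longitude)
(t1, la1, lo1) = (0, 59.3293, 18.0686)
(t2, la2, lo2) = (10, 59.3301, 18.0702)
speed = haversine_m(la1, lo1, la2, lo2) / (t2 - t1)  # metres per second
print(f"{speed:.1f} m/s")
```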
Abstract:
The objectives of this work were the analysis and attenuation of free-surface multiples, and the processing and imaging of marine seismic data, in order to obtain migrated images useful for geological interpretation aimed at petroleum exploration. Systematic attention was given to the study of free-surface multiples from the point of view of the prediction filter based on communication theory, with the goal of better applying the WH predictive deconvolution filter at the processing stage following NMO correction, although other methods may be considered more competitive. Identifying and attenuating multiple reflections in real seismic data remains a major challenge in seismic data processing, since multiples are treated as noise. However, because this noise is classified as coherent, several techniques have been developed to attenuate it and thus avoid cascading errors in later stages such as processing, event picking, tomographic inversion, imaging and, finally, the geological interpretation of the resulting images. Another aspect of the work was to establish a processing-imaging workflow in which free-surface multiple attenuation is a central step. Time- and depth-migrated sections were obtained on which the desired interpretation can be carried out. This work was developed with the CWP/SU and MatLab packages.
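For concreteness, a minimal NumPy/SciPy sketch of trace-by-trace predictive deconvolution in the Wiener sense: build the trace autocorrelation, solve the Toeplitz normal equations for a prediction filter at lag `gap`, and subtract the predictable (multiple) part. Parameter names and the prewhitening value are illustrative, not taken from this work:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, gap, nfilt, eps=1e-3):
    """Wiener predictive deconvolution of a single seismic trace.

    gap   : prediction distance in samples (the multiple's periodicity)
    nfilt : prediction-filter length in samples
    """
    n = len(trace)
    ac = np.correlate(trace, trace, mode="full")[n - 1:]  # lags 0, 1, 2, ...
    r = ac[:nfilt].copy()
    r[0] *= 1.0 + eps              # prewhitening for numerical stability
    g = ac[gap:gap + nfilt]        # right-hand side: lags gap .. gap+nfilt-1
    filt = solve_toeplitz(r, g)    # Toeplitz normal equations
    pred = np.convolve(trace, filt)[:n]
    out = trace.copy()
    out[gap:] -= pred[:n - gap]    # subtract the predicted multiples
    return out
```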
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
To understand a city and its urban structure, it is necessary to study its history. This is feasible through GIS (Geographical Information Systems) and its by-products on the web. Starting from a cartographic view, they allow an initial understanding of, and a comparison between, present and past data, together with easy and intuitive access to database information. The research led to the creation of a GIS for the city of Bologna, based on varied data such as historical maps and vector and alphanumeric historical data. After providing information about GIS, we considered spreading and sharing the collected data on the Web, studying two solutions available on the market: Web Mapping and WebGIS. In this study we discuss the stages, beginning with the development of the Historical GIS of Bologna, which led to the creation of an open source WebGIS (MapServer and Chameleon) and Web Mapping services (Google Earth, Google Maps and OpenLayers).
Abstract:
Human molecular biology is a highly complex and diverse field with research ongoing in many areas. The focus lies in particular on genomics, proteomics, transcriptomics and metabolomics, and years of research have accumulated large amounts of valuable data. This collection is growing steadily, and no stagnation is foreseeable. By now, however, this permanent flood of information has buried valuable knowledge in unmanageable digital mountains of data and has turned the gathering of research-specific, reliable information into a major challenge. The work presented in this dissertation has generated a comprehensive compendium of human tissues for biomedical analyses. It is named medicalgenomics.org and has solved various biomedical problems in the search for specific knowledge across numerous databases. The compendium is the first of its kind, and the knowledge it provides will help scientists gain a better systematic overview of specific genes or functional profiles, with respect to regulation as well as pathological and physiological conditions. In addition, various query methods enable efficient analysis of signaling events and metabolic pathways, as well as the study of genes at the expression level. The full range of these query options enables scientists to create highly specialized genetic road maps with which future experiments can be planned more precisely. As a result, valuable resources and time can be saved while the prospects of success increase. Furthermore, the comprehensive knowledge in the compendium can be used to generate and test biomedical hypotheses.