29 results for open source seismic data processing packages
Abstract:
Cross-hole anisotropic electrical and seismic tomograms of fractured metamorphic rock have been obtained at a test site where extensive hydrological data were available. A strong correlation between electrical resistivity anisotropy and seismic compressional-wave velocity anisotropy has been observed. Analysis of core samples from the site reveals that the shale-rich rocks have fabric-related average velocity anisotropy of between 10% and 30%. The cross-hole seismic data are consistent with these values, indicating that the observed anisotropy might be principally due to the inherent rock fabric rather than to aligned sets of open fractures. One region with velocity anisotropy greater than 30% has been modelled as aligned open fractures within an anisotropic rock matrix, and this model is consistent with the available fracture density and hydraulic transmissivity data from the boreholes and with the cross-hole resistivity tomography data. In general, however, the study highlights the uncertainties that can arise, due to the relative influence of rock fabric and fluid-filled fractures, when using geophysical techniques for hydrological investigations.
Abstract:
This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence on unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertisement, and a test of the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of their registration and submission of personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveal a significant divergence between EU data protection law in theory and in practice. Although a wide majority of the websites in the sample (69%) have a system in place to ask for separate consent to engage in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The test with UCE shows that only one out of three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance by UK online service providers with essential requirements of data protection law. In this respect, it suggests that the standard of implementation, information and supervision by the UK authorities is inadequate, especially in light of the clarifications provided at EU level.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
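To make the reported variability figures concrete, here is a minimal sketch (in Python rather than the authors' R function; the variable names, toy data and thresholds are illustrative assumptions) of estimating per-gene inter-array SD from two replicate arrays of log2 signals and flagging genes as expressed relative to a background level:

```python
# Hedged sketch, not the authors' pipeline: per-gene inter-array SD and a simple
# "expressed" call based on the ~0.5 log2-unit experimental SD reported above.
import numpy as np

def interarray_sd(log2_rep1, log2_rep2):
    """Per-gene standard deviation across two replicate arrays (log2 scale)."""
    return np.vstack([log2_rep1, log2_rep2]).std(axis=0, ddof=1)

def is_expressed(log2_signal, background_log2, experimental_sd=0.5, n_sd=2.0):
    """Flag genes whose signal exceeds background by n_sd experimental SDs."""
    return np.asarray(log2_signal) > background_log2 + n_sd * experimental_sd

# Toy data standing in for normalised log2 intensities from two replicate arrays
rng = np.random.default_rng(1)
rep1 = rng.normal(8.0, 1.5, size=10_000)
rep2 = rep1 + rng.normal(0.0, 0.5, size=10_000)   # replicate noise of ~0.5 log2 units
print("median inter-array SD:", np.median(interarray_sd(rep1, rep2)))
print("fraction called expressed:", is_expressed(rep1, background_log2=6.0).mean())
```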
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tool available in modern interpretation software packages often employs artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
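As a rough illustration of the two characterisation methods being compared (the window size and attribute choices here are assumptions, not the paper's design), the following sketch builds both a raw-amplitude feature vector and a small attribute vector for a candidate pick; either could then be fed to an ANN classifier:

```python
# Illustrative sketch only: raw-data vs attribute characterisation of a pick.
import numpy as np
from scipy.signal import hilbert

def raw_window(trace, pick_idx, half_width=8):
    """Raw-data characterisation: amplitude samples centred on the candidate pick."""
    return trace[pick_idx - half_width : pick_idx + half_width + 1]

def attribute_vector(trace, pick_idx, half_width=8):
    """Attribute characterisation: a few illustrative attributes of the same window."""
    w = raw_window(trace, pick_idx, half_width)
    envelope = np.abs(hilbert(w))                 # instantaneous amplitude (reflection strength)
    return np.array([
        w[half_width],                            # amplitude at the pick itself
        envelope.mean(),                          # mean reflection strength over the window
        float(np.argmax(envelope) - half_width),  # offset of the envelope peak, in samples
    ])

# Toy trace: a decaying oscillation standing in for a seismic trace
trace = np.sin(np.linspace(0.0, 20.0 * np.pi, 500)) * np.exp(-np.linspace(0.0, 3.0, 500))
print(raw_window(trace, 250).shape)
print(attribute_vector(trace, 250))
```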
Abstract:
Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
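A minimal sketch of how such a lag can be quantified, assuming two evenly sampled annual series (illustrative only, not the authors' analysis): locate the lag that maximises the correlation between the two series.

```python
# Hedged sketch: lagged Pearson correlation between two annual time series.
import numpy as np

def lag_correlation(a, b, k):
    """Correlation of a[t] with b[t - k] (positive k means b is delayed w.r.t. a)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if k >= 0:
        x, y = a[k:], b[:len(b) - k]
    else:
        x, y = a[:len(a) + k], b[-k:]
    return np.corrcoef(x, y)[0, 1]

def best_lag(a, b, max_lag=5):
    """Lag (in samples; here, years) that maximises the correlation between a and b."""
    return max(range(-max_lag, max_lag + 1), key=lambda k: lag_correlation(a, b, k))

years = np.arange(1932, 2003)
open_flux = np.sin(2 * np.pi * years / 11.0)   # toy solar-cycle proxy
euv_proxy = np.roll(open_flux, 2)              # toy EUV series delayed by two years
print(best_lag(euv_proxy, open_flux, max_lag=4))   # recovers the two-year delay
```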
Abstract:
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
Abstract:
Hydroponic isotope labelling of entire plants (HILEP) is a cost-effective method enabling metabolic labelling of whole and mature plants with a stable isotope such as N-15. By utilising hydroponic media that contain N-15 inorganic salts as the sole nitrogen source, near to 100% N-15 labelling of proteins can be achieved. In this study, it is shown that HILEP, in combination with mass spectrometry, is suitable for relative protein quantitation of seven-week-old Arabidopsis plants submitted to oxidative stress. Protein extracts from pooled N-14- and N-15-hydroponically grown plants were fractionated by SDS-PAGE, digested and analysed by liquid chromatography electrospray ionisation tandem mass spectrometry (LC-ESI-MS/MS). Proteins were identified and the spectra of N-14/N-15 peptide pairs were extracted using their m/z, chromatographic retention time, isotopic distributions, and the m/z difference between the N-14 and N-15 peptides. Relative amounts were calculated as the ratio of the sum of the peak areas of the two distinct N-14 and N-15 peptide isotope envelopes. Using Mascot and the open source trans-proteomic pipeline (TPP), the data processing was automated for global proteome quantitation down to the isoform level by extracting isoform-specific peptides. With this combination of metabolic labelling and mass spectrometry it was possible to show differential protein expression in the apoplast of plants submitted to oxidative stress. Moreover, it was possible to discriminate between differentially expressed isoforms belonging to the same protein family, such as isoforms of xylanases and pathogen-related glucanases (PR 2). (C) 2008 Elsevier Ltd. All rights reserved.
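A hedged sketch of the quantitation step described above (illustrative only, not the Mascot/TPP implementation; peak areas are assumed to be already extracted): the relative amount of a peptide is the ratio of the summed peak areas of its N-14 and N-15 isotope envelopes.

```python
# Minimal sketch: 14N/15N relative quantitation from extracted isotope-envelope areas.
import numpy as np

def relative_abundance(peak_areas_14n, peak_areas_15n):
    """Ratio of the summed 14N envelope area to the summed 15N envelope area."""
    return np.sum(peak_areas_14n) / np.sum(peak_areas_15n)

# Toy isotope-envelope peak areas for one peptide pair (arbitrary units)
light = [1.2e6, 2.3e6, 1.9e6, 0.8e6]   # 14N-labelled sample
heavy = [0.9e6, 1.8e6, 1.5e6, 0.6e6]   # 15N-labelled reference
print(f"14N/15N ratio: {relative_abundance(light, heavy):.2f}")
```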
Abstract:
The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
Abstract:
The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing the speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualize large image stacks on standard computing platforms provides a versatile tool that can help address the bottleneck in the availability of reconstructions. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.
Abstract:
The CHARMe project enables the annotation of climate data with key pieces of supporting information that we term “commentary”. Commentary reflects the experience that has built up in the user community, and can help new or less-expert users (such as consultants, SMEs, experts in other fields) to understand and interpret complex data. In the context of global climate services, the CHARMe system will record, retain and disseminate this commentary on climate datasets, and provide a means for feeding back this experience to the data providers. Based on novel linked data techniques and standards, the project has developed a core system, data model and suite of open-source tools to enable this information to be shared, discovered and exploited by the community.
Abstract:
Environment monitoring applications using Wireless Sensor Networks (WSNs) have received a great deal of attention in recent years. In much of this research, tasks such as sensor data processing, decision-making about environmental states and events, and emergency message sending are performed by a remote server. This paper presents simulation results for a proposed cross-layer protocol for two different applications in which the reliability of delivered data, delay and the lifetime of the network need to be considered. A WSN designed for the proposed applications needs efficient MAC and routing protocols to guarantee the reliability of the data delivered from source nodes to the sink. A cross-layer design based on that given in [1] has been extended and simulated for the proposed applications, with new features such as route discovery algorithms added. Simulation results show that the proposed cross-layer protocol can conserve energy for nodes and provide the required performance in terms of network lifetime, delay and reliability.
Abstract:
For users of climate services, the ability to quickly determine the datasets that best fit one's needs would be invaluable. The volume, variety and complexity of climate data make this judgment difficult. The ambition of CHARMe ("Characterization of metadata to enable high-quality climate services") is to give a wider interdisciplinary community access to a range of supporting information, such as journal articles, technical reports or feedback on previous applications of the data. The capture and discovery of this "commentary" information, often created by data users rather than data providers, and currently not linked to the data themselves, has not been significantly addressed previously. CHARMe applies the principles of Linked Data and open web standards to associate, record, search and publish user-derived annotations in a way that can be read both by users and automated systems. Tools have been developed within the CHARMe project that enable annotation capability for data delivery systems already in wide use for discovering climate data. In addition, the project has developed advanced tools for exploring data and commentary in innovative ways, including an interactive data explorer and comparator ("CHARMe Maps") and a tool for correlating climate time series with external "significant events" (e.g. instrument failures or large volcanic eruptions) that affect the data quality. Although the project focuses on climate science, the concepts are general and could be applied to other fields. All CHARMe system software is open-source, released under a liberal licence, permitting future projects to re-use the source code as they wish.
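As a rough sketch of what such a linked-data "commentary" record might look like (the field names follow the W3C Open Annotation vocabulary as an assumption about the underlying model; all URIs are invented placeholders), an annotation linking a supporting document to a climate dataset could be expressed as:

```python
# Hedged illustration: a commentary annotation as a JSON-LD-style structure.
import json

annotation = {
    "@context": {"oa": "http://www.w3.org/ns/oa#"},   # W3C Open Annotation namespace
    "@id": "http://example.org/annotations/1",                                # placeholder
    "@type": "oa:Annotation",
    "oa:hasBody": {"@id": "http://example.org/reports/validation-study"},     # placeholder commentary resource
    "oa:hasTarget": {"@id": "http://example.org/datasets/sst-climatology"},   # placeholder climate dataset
    "oa:motivatedBy": {"@id": "oa:linking"},
}
print(json.dumps(annotation, indent=2))
```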
Abstract:
Geospatial information of many kinds, from topographic maps to scientific data, is increasingly being made available through web mapping services. These allow georeferenced map images to be served from data stores and displayed in websites and geographic information systems, where they can be integrated with other geographic information. The Open Geospatial Consortium’s Web Map Service (WMS) standard has been widely adopted in diverse communities for sharing data in this way. However, current services typically provide little or no information about the quality or accuracy of the data they serve. In this paper we will describe the design and implementation of a new “quality-enabled” profile of WMS, which we call “WMS-Q”. This describes how information about data quality can be transmitted to the user through WMS. Such information can exist at many levels, from entire datasets to individual measurements, and includes the many different ways in which data uncertainty can be expressed. We also describe proposed extensions to the Symbology Encoding specification, which include provision for visualizing uncertainty in raster data in a number of different ways, including contours, shading and bivariate colour maps. We shall also describe new open-source implementations of the new specifications, which include both clients and servers.
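For context, a standard WMS 1.3.0 GetMap request, which the WMS-Q profile builds upon, can be assembled as below (the endpoint and layer name are placeholders, not taken from the paper):

```python
# Sketch of a standard WMS 1.3.0 GetMap request URL.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature",   # placeholder layer name
    "STYLES": "",
    "CRS": "CRS:84",                       # lon/lat axis order
    "BBOX": "-180,-90,180,90",
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
}
print("http://example.org/wms?" + urlencode(params))   # placeholder endpoint
```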
Abstract:
This paper presents an open-source canopy height profile (CHP) toolkit designed for processing small-footprint full-waveform LiDAR data to obtain estimates of effective leaf area index (LAIe) and CHPs. The use of the toolkit is presented with a case study of LAIe estimation in discontinuous-canopy fruit plantations. The experiments are carried out in two study areas, namely, orange and almond plantations, with different percentages of canopy cover (48% and 40%, respectively). For comparison, two commonly used discrete-point LAIe estimation methods are also tested. The LiDAR LAIe values are first computed for each of the sites and each method as a whole, providing “apparent” site-level LAIe, which disregards the discontinuity of the plantations’ canopies. Since the toolkit allows for the calculation of the study area LAIe at different spatial scales, between-tree-level clumping can easily be accounted for; this capability is then used to illustrate the impact of the discontinuity of canopy cover on LAIe retrieval. The LiDAR LAIe estimates are therefore computed at smaller scales as a mean of LAIe over grid cells of various sizes, providing estimates of “actual” site-level LAIe. Subsequently, the LiDAR LAIe results are compared with theoretical models of “apparent” LAIe versus “actual” LAIe, based on the known percentage canopy cover of each site. The comparison of these models with LiDAR LAIe derived from the smallest grid-cell sizes, against the estimates of LAIe for the whole site, shows that the LAIe estimates obtained from the CHP toolkit are closest to those of the theoretical models.
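A minimal sketch of why scale matters here, under the assumption of a simple Beer-Lambert gap-fraction inversion (not necessarily the CHP toolkit's exact algorithm): "apparent" site-level LAIe computed from the pooled gap fraction underestimates the "actual" LAIe obtained as the mean of per-grid-cell estimates when the canopy is clumped.

```python
# Hedged sketch: apparent vs actual LAIe in a discontinuous canopy.
import numpy as np

def laie_from_gap_fraction(gap_fraction, k=0.5):
    """Effective LAI from gap fraction via LAIe = -ln(P_gap) / k (k assumed here)."""
    return -np.log(gap_fraction) / k

# Toy per-cell gap fractions: half the cells are bare ground (gap near 1),
# half are under tree crowns (gap near 0.2), mimicking roughly half canopy cover.
cells = np.array([0.98, 0.97, 0.99, 0.96, 0.22, 0.18, 0.25, 0.20])

apparent = laie_from_gap_fraction(cells.mean())   # pooled, whole-site gap fraction
actual = laie_from_gap_fraction(cells).mean()     # mean of per-grid-cell LAIe
print(f"apparent site LAIe: {apparent:.2f}, actual (grid-cell mean): {actual:.2f}")
```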
Abstract:
Flood extent maps derived from SAR images are a useful source of data for validating hydraulic models of river flood flow. The accuracy of such maps is reduced by a number of factors, including changes in returns from the water surface caused by different meteorological conditions and the presence of emergent vegetation. The paper describes how improved accuracy can be achieved by modifying an existing flood extent delineation algorithm to use airborne laser altimetry (LiDAR) as well as SAR data. The LiDAR data provide an additional constraint that waterline (land-water boundary) heights should vary smoothly along the flooded reach. The method was tested on a SAR image of a flood for which contemporaneous aerial photography existed, together with LiDAR data of the un-flooded reach. Waterline heights of the SAR flood extent conditioned on both SAR and LiDAR data matched the corresponding heights from the aerial photo waterline significantly more closely than those from the SAR flood extent conditioned only on SAR data.