952 results for Scientific Data Visualisation
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world's oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides environmental context for all samples from the Tara Oceans Expedition (2009-2013), including calculated averages of measurements made concurrently at the sampling location and depth, and calculated averages from climatologies (AMODIS, VGPM) and satellite products.
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world's oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides environmental context for all samples from the Tara Oceans Expedition (2009-2013), describing mesoscale features related to the sampling date, time and location. It includes calculated averages of measurements made concurrently at the sampling location and depth, and calculated averages from climatologies (AMODIS, VGPM) and satellite products.
Abstract:
Is Benford's law a good instrument for detecting fraud in reports of statistical and scientific data? For a valid test, the probabilities of "false positives" and "false negatives" have to be low. However, it is very doubtful whether the Benford distribution is an appropriate tool to discriminate between manipulated and non-manipulated estimates. Further research should focus more on the validity of the test, and test results should be interpreted more carefully.
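The Benford test discussed above compares observed leading-digit frequencies against the expected distribution log10(1 + 1/d). A minimal sketch of such a screening test, assuming a simple chi-square statistic as the discrepancy measure (the function names and thresholding strategy are illustrative, not from the paper):

```python
import math
from collections import Counter

def benford_expected():
    # Expected leading-digit frequencies under Benford's law: log10(1 + 1/d)
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_freqs(values):
    # Observed relative frequency of the first significant digit
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def chi_square_stat(values):
    # Pearson chi-square distance between observed and Benford-expected
    # frequencies; a large statistic flags a "suspicious" data set, at the
    # risk of the false positives the abstract warns about
    n = len([v for v in values if v != 0])
    exp = benford_expected()
    obs = leading_digit_freqs(values)
    return sum(n * (obs[d] - exp[d]) ** 2 / exp[d] for d in range(1, 10))
```

For example, powers of 2 follow Benford's law closely and yield a small statistic, while uniformly distributed leading digits yield a large one; the abstract's point is that a large statistic alone does not prove manipulation.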
Abstract:
Spatial Data Infrastructures have become a methodological and technological benchmark enabling distributed access to historical-cartographic archives. However, it is essential to offer enhanced virtual tools that imitate the processes and methodologies currently carried out by librarians, historians and academics in the existing map libraries around the world. These virtual processes must be supported by a generic framework for managing, querying, and accessing distributed georeferenced resources and other content types such as scientific data or information. The authors have designed and developed support tools, based on SDI foundations, that provide enriched browsing, measurement and geometrical analysis capabilities, and dynamic querying methods. The DIGMAP engine and the IBERCARTO collection enable access to georeferenced historical-cartographic archives. Based on lessons learned from the CartoVIRTUAL and DynCoopNet projects, a generic service architecture scheme is proposed. In this way it becomes possible to integrate virtual map rooms and SDI technologies, supporting researchers in the historical and social domains.
Abstract:
This study analyses the astronomical expedition undertaken by the United States Navy to Chile between 1849 and 1852, commanded by the officer and astronomer James Melville Gilliss. The aim was to understand the scientific, political, geopolitical and commercial interests that motivated the voyage, as well as the images and representations of South America, particularly of Panama, Peru, Chile and Argentina, constructed and disseminated through the expedition's official report, entitled "The U.S. Naval Astronomical Expedition to the Southern Hemisphere during the Years 1849-1852". This research also examines the different discursive devices used by the officers who wrote the report, James Gilliss and Archibald MacRae, discussing their disagreements and differing views on how scientific data should be conveyed, as well as their distinct ways of portraying South America.
Abstract:
The spatial data set delineates areas with similar environmental properties regarding soil, terrain morphology, climate and affiliation to the same administrative unit (NUTS3 or comparable units in size) at a minimum pixel size of 1 km2. The purpose of developing this data set is to provide a link between spatial environmental information (e.g. soil properties) and statistical data (e.g. crop distribution) available at the administrative level. Impact assessment of agricultural management on emissions of pollutants or radiatively active gases, or analysis of the influence of agricultural management on the supply of ecosystem services, requires the proper spatial coincidence of the driving factors. The HSU data set provides, for example, the link between the agro-economic model CAPRI and biophysical assessment of environmental impacts (updating the spatial units of Leip et al. 2008) for the analysis of policy scenarios. Recently, a statistical model to disaggregate crop information available from regional statistics to the HSU has been developed (Lamboni et al. 2016). The HSU data set consists of spatial layers provided in vector and raster format as well as attribute tables with information on the properties of the HSU. All input data for the delineation of the HSU are publicly available. For some parameters the attribute tables provide the link between the HSU data set and e.g. the soil map(s) rather than the data itself. The HSU data set is closely linked to the USCIE data set.
Abstract:
The present data publication provides permanent links to original and updated versions of validated data files. The data files include properties of seawater, particulate matter and dissolved matter that were measured from discrete water samples collected with Niskin bottles during the 2009-2013 Tara Oceans Expedition. Properties include pigment concentrations from HPLC analysis (10 depths per vertical profile, 25 pigments per depth), the carbonate system (surface and 400 m; pH (total scale), CO2, pCO2, fCO2, HCO3, CO3, total alkalinity, total carbon, OmegaAragonite, OmegaCalcite, and dosage flags), nutrients (10 depths per vertical profile; NO2, PO4, NO2/NO3, SI, quality flags), DOC, CDOM, and dissolved oxygen isotopes. The Service National d'Analyse des Paramètres Océaniques du CO2, at the Université Pierre et Marie Curie, determined CT and AT potentiometrically (Edmond 1970; DOE 1994) on samples preserved according to Dickson et al. (2007). More than 250 vertical profiles of these properties were made across the world ocean. DOC, CDOM and dissolved oxygen isotopes are available only for the Arctic Ocean and Arctic Seas (2013).
Abstract:
The present data publication provides permanent links to original and updated versions of validated data files. The data files include properties of seawater, particulate matter and dissolved matter from physical, optical and imaging sensors mounted on a vertical sampling system (Rosette) used during the 2009-2013 Tara Oceans Expedition. It comprised two pairs of conductivity and temperature sensors (SEABIRD components) and a complete set of WET Labs optical sensors, including chlorophyll and CDOM fluorometers, a 25 cm transmissometer, and a one-wavelength backscatter meter. In addition, a SATLANTIC ISUS nitrate sensor and a Hydroptic Underwater Vision Profiler (UVP) were mounted on the Rosette. In the Arctic Ocean and Arctic Seas (2013), a second oxygen sensor (SBE43) and a four-frequency Aquascat acoustic profiler were added. The system was powered by dedicated Li-Ion batteries, and data were self-recorded at 24 Hz. All sensors were factory calibrated before, during and after the four-year program. Oxygen was validated using climatologies (WOA09). Nitrate and fluorescence data were adjusted with discrete measurements from Niskin bottles mounted on the Rosette, and optical darks were performed monthly on board. A total of 839 quality-checked vertical profiles were made during the Tara Oceans Expedition (2009-2013).
Abstract:
Background: A new immunoassay for free-light-chain measurements has been reported to be useful for the diagnosis and monitoring of monoclonal light chain diseases and nonsecretory myeloma. We describe experience with and some potential pitfalls of the assay. Methods: The assay was assessed for precision, sample type and stability, recovery, and harmonization of results between two analyzers on which the reagents are used. Free-light-chain concentrations were measured in healthy individuals (to determine biological variation), patients with monoclonal gammopathy of undetermined significance, myeloma patients after autologous stem cell transplants, and patients with renal disease. Results: Analytical imprecision (CV) was 6-11% for kappa and lambda free-light-chain measurements and 16% for the calculated kappa/lambda ratio. Biological variation was generally insignificant compared with analytical variation. Despite the same reagent source, values were not completely harmonized between assay systems and may produce discordant free-light-chain ratios. In some patients with clinically stable myeloma, or post-transplantation, or with monoclonal gammopathy of undetermined significance, free-light-chain concentration and ratio were within the population reference interval despite the presence of monoclonal intact immunoglobulin in serum. In other patients with monoclonal gammopathy of undetermined significance, values were abnormal although there was no clinical evidence of progression to multiple myeloma. Conclusions: The use of free-light-chain measurements alone cannot differentiate some groups of patients with monoclonal gammopathy from healthy individuals. As with the introduction of any new test, it is essential that more scientific data about the use of this assay in different subject groups become available so that results can be interpreted with clinical certainty. (C) 2003 American Association for Clinical Chemistry.
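The kappa/lambda ratio described above is simple arithmetic on the two measured concentrations, flagged against a population reference interval. A minimal sketch; the reference-interval bounds below are assumptions for illustration, not the intervals used in the study:

```python
def flc_ratio(kappa_mg_l, lambda_mg_l):
    # kappa/lambda free-light-chain ratio from serum concentrations in mg/L
    return kappa_mg_l / lambda_mg_l

def ratio_flag(ratio, low=0.26, high=1.65):
    # Flag against an ILLUSTRATIVE population reference interval
    # (the bounds here are assumptions, not taken from the paper);
    # values outside the interval suggest a clonal excess of one light chain
    if ratio < low:
        return "low"
    if ratio > high:
        return "high"
    return "normal"
```

The abstract's caveat applies directly: a "normal" flag does not exclude monoclonal disease, and values from different analyzers may not be harmonized even with the same reagents.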
Abstract:
Purpose: Although manufacturers of the bicycle power monitoring devices SRM and Power Tap (PT) claim accuracy to within 2.5%, there are limited scientific data available in support. The purpose of this investigation was to assess the accuracy of the SRM and PT under different conditions. Methods: First, 19 SRM units were calibrated, raced for 11 months, and retested using a dynamic CALRIG (50-1000 W at 100 rpm). Second, using the same procedure, five PT units were repeat tested on alternate days. Third, the most accurate SRM and PT were tested for the influence of cadence (60, 80, 100, 120 rpm), temperature (8 and 21 °C) and time (1 h at ~300 W) on accuracy. Finally, the same SRM and PT were downloaded and compared after random cadence and gear surges using the CALRIG and on a training ride. Results: The mean error scores for SRM and PT factory calibration over a range of 50-1000 W were 2.3 ± 4.9% and -2.5 ± 0.5%, respectively. A second set of trials provided stable results for 15 calibrated SRM units after 11 months (-0.8 ± 1.7%), and follow-up testing of all PT units confirmed these findings (-2.7 ± 0.1%). Accuracy for the SRM and PT was not largely influenced by time and cadence; however, power output readings were noticeably influenced by temperature (5.2% for SRM and 8.4% for PT). During field trials, SRM average and maximum power were 4.8% and 7.3% lower, respectively, compared with PT. Conclusions: When operated according to the manufacturers' instructions, both SRM and PT offer the coach, athlete, and sport scientist the ability to accurately monitor power output in the lab and the field. Calibration procedures matching performance tests (duration, power, cadence, and temperature) are, however, advised, as the error associated with each unit may vary.
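The error scores above are signed percent deviations of each unit's reading from the CALRIG reference, summarised as mean ± SD across units. A minimal sketch of that bookkeeping (function names are illustrative):

```python
def percent_error(measured_w, reference_w):
    # Signed percent error of a power reading against the reference rig:
    # positive means the unit reads high, negative means it reads low
    return 100.0 * (measured_w - reference_w) / reference_w

def mean_sd(errors):
    # Mean +/- sample standard deviation, as reported for each batch of units
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return mean, var ** 0.5
```

For example, a unit reading 306 W against a 300 W reference has a +2.0% error; a batch summary like "-0.8 ± 1.7%" is the mean and SD of such per-unit errors.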
Abstract:
Occupational standards concerning allowable concentrations of chemical compounds in the ambient air of workplaces have been established in several countries worldwide. With the integration of the European Union (EU), there has been a need to establish harmonised Occupational Exposure Limits (OEL). The European Commission Directive 95/320/EC of 12 July 1995 tasked the Scientific Committee for Occupational Exposure Limits (SCOEL) with proposing, based on scientific data and where appropriate, occupational limit values, which may include the 8-h time-weighted average (TWA), short-term limits/excursion limits (STEL) and Biological Limit Values (BLVs). In 2000, the European Union issued a list of 62 chemical substances with Occupational Exposure Limits. Of these, 25 substances received a skin notation, indicating that toxicologically significant amounts may be taken up via the skin. For such substances, monitoring of concentrations in ambient air may not be sufficient, and biological monitoring strategies appear of potential importance in the medical surveillance of exposed workers. Recent progress has been made with respect to the formulation of a strategy related to health-based BLVs. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
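The 8-h TWA mentioned above is the concentration-weighted average over a shift: the sum of each interval's concentration times its duration, divided by 8 hours. A minimal sketch, assuming the concentration is constant within each measured interval:

```python
def twa_8h(intervals):
    # 8-h time-weighted average: sum(C_i * t_i) / 8
    # 'intervals' is a list of (concentration, hours) pairs covering the shift;
    # unmeasured time counts as zero exposure
    return sum(c * t for c, t in intervals) / 8.0

def exceeds_oel(intervals, oel):
    # Compare the shift's TWA against the occupational exposure limit
    return twa_8h(intervals) > oel
```

For example, 4 h at 10 ppm followed by 4 h at 20 ppm gives a TWA of 15 ppm, which would exceed a 12 ppm OEL but not a 20 ppm one.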
Abstract:
Terrain can be approximated by a triangular mesh consisting of millions of 3D points. Multiresolution triangular mesh (MTM) structures are designed to support applications that use terrain data at variable levels of detail (LOD). Typically, an MTM adopts a tree structure in which a parent node represents a lower-resolution approximation of its descendants. Given a region of interest (ROI) and a LOD, retrieving the required terrain data from the database means traversing the MTM tree from the root to reach all the nodes satisfying the ROI and LOD conditions. This process, while commonly used for multiresolution terrain visualization, is inefficient, as it incurs either a large number of sequential I/O operations or the fetching of a large amount of extraneous data. Various spatial indexes have been proposed in the past to address this problem; however, level-by-level tree traversal remains a common practice for obtaining topological information among the retrieved terrain data. A new MTM data structure called direct mesh is proposed. We demonstrate that with direct mesh the amount of data retrieved can be substantially reduced. Compared with existing MTM indexing methods, a significant performance improvement has been observed for real-life terrain data.
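The level-by-level ROI/LOD traversal described above can be sketched with a toy tree of bounding boxes; this is a simplified stand-in for illustration, not the paper's direct mesh structure:

```python
from dataclasses import dataclass, field

@dataclass
class MTMNode:
    # Hypothetical simplified MTM node: an axis-aligned bounding box
    # (xmin, ymin, xmax, ymax) and a resolution level; children refine
    # the parent's lower-resolution approximation
    bbox: tuple
    level: int
    children: list = field(default_factory=list)

def intersects(a, b):
    # Two axis-aligned boxes overlap unless separated on an axis
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def retrieve(node, roi, lod):
    # Root-to-leaf traversal: descend only into nodes overlapping the ROI,
    # stopping at the requested level of detail. This is the conventional
    # scheme whose I/O cost motivates the direct mesh proposal.
    if not intersects(node.bbox, roi):
        return []
    if node.level == lod or not node.children:
        return [node]
    out = []
    for child in node.children:
        out.extend(retrieve(child, roi, lod))
    return out
```

Each recursive step may touch nodes stored far apart on disk, which is why the abstract characterises the traversal as incurring many sequential I/O operations or fetching extraneous data.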
Abstract:
Email is an important form of asynchronous communication. Visualizing analyses of email communication patterns during a collaborative activity helps us better understand the nature of collaboration and identify the key players. By analysing the contents of email communication and adding the participants' reflective comments on its perceived importance, new information can be gleaned that is not immediately obvious in its original flat form. This paper outlines a proof-of-concept prototype collaborative email visualisation schema. Data from a collaboration case study are analysed and subsequently employed to construct a display of the relative impact of both the key players and the types of email used.
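Ranking key players from message traffic, weighted by perceived-importance scores, can be sketched as below. The scoring scheme (including the half-weight given to recipients) is an assumption for illustration, not the paper's schema:

```python
from collections import Counter

def key_players(messages, weights=None):
    # Rank participants by weighted message involvement.
    # 'messages' is a list of (sender, [recipients]) pairs; 'weights' holds
    # an optional per-message importance score (e.g. from participants'
    # reflective comments), defaulting to 1.0.
    score = Counter()
    for i, (sender, recipients) in enumerate(messages):
        w = weights[i] if weights else 1.0
        score[sender] += w
        for r in recipients:
            score[r] += w * 0.5  # assumption: receiving counts half as much
    return [p for p, _ in score.most_common()]
```

For example, the participant who sends the most highly weighted messages surfaces first, giving a crude "key player" ordering that a visualisation could then display.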
Abstract:
This thesis introduces a flexible visual data exploration framework which combines advanced projection algorithms from the machine learning domain with visual representation techniques developed in the information visualisation domain to help a user explore and understand large multi-dimensional datasets effectively. The advantage of such a framework over other techniques currently available to domain experts is that the user is directly involved in the data mining process and that advanced machine learning algorithms are employed for better projection. A hierarchical visualisation model guided by a domain expert allows them to obtain an informed segmentation of the input space. Two other components of this thesis exploit properties of these principled probabilistic projection algorithms: a guided mixture of local experts algorithm, which provides robust prediction, and a model that estimates feature saliency simultaneously with the training of a projection algorithm. Local models are useful since a single global model cannot capture the full variability of a heterogeneous data space such as the chemical space. Probabilistic hierarchical visualisation techniques provide an effective soft segmentation of an input space by a visualisation hierarchy whose leaf nodes represent different regions of the input space. We use this soft segmentation to develop a guided mixture of local experts (GME) algorithm which is appropriate for the heterogeneous datasets found in chemoinformatics problems. Moreover, in this approach the domain experts are more involved in the model development process, which is suitable for an intuition- and domain-knowledge-driven task such as drug discovery. We also derive a generative topographic mapping (GTM) based data visualisation approach which estimates feature saliency simultaneously with the training of a visualisation model.
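The mixture-of-local-experts idea above blends local predictors according to a soft segmentation of the input space. A one-dimensional sketch using Gaussian responsibilities as a stand-in for the soft segmentation given by the visualisation hierarchy (this is not the thesis's GTM-based model, just the blending mechanism):

```python
import math

def soft_responsibilities(x, centres, width=1.0):
    # Gaussian responsibility of each local region for input x, normalised
    # to sum to 1 (a stand-in for the hierarchy's soft segmentation)
    w = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centres]
    s = sum(w)
    return [wi / s for wi in w]

def mixture_predict(x, centres, experts):
    # Mixture of local experts: blend each expert's prediction by its
    # region's responsibility for x, so each expert dominates near its centre
    r = soft_responsibilities(x, centres)
    return sum(ri * e(x) for ri, e in zip(r, experts))
```

Near a region's centre the local expert dominates; between regions the prediction interpolates smoothly, which is what makes a single global model unnecessary for heterogeneous data.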
Abstract:
This thesis is a study of low-dimensional visualisation methods for data visualisation under uncertainty of the input data. It focuses on the two main feed-forward neural network algorithms, NeuroScale and Generative Topographic Mapping (GTM), and attempts to make both algorithms able to accommodate that uncertainty. The two models are shown not to work well under high levels of noise within the data and need to be modified. The modifications of both models, NeuroScale and GTM, are verified using synthetic data to show their ability to accommodate the noise. The thesis then addresses the controversy surrounding the non-uniqueness of predictive gene lists (PGL) for predicting the prognosis outcomes of breast cancer patients from DNA microarray experiments. Many of these studies have ignored the uncertainty issue, resulting in random correlations of sparse model selection in high-dimensional spaces. The visualisation techniques are used to confirm that the patients involved in such medical studies are intrinsically unclassifiable on the basis of the provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.