987 results for Experiment data
Abstract:
Reactive transport modelling was used to simulate solute transport, thermodynamic reactions, ion exchange and biodegradation in the Porewater Chemistry (PC) experiment at the Mont Terri Rock Laboratory. Simulations show that the most important chemical processes controlling the fluid composition within the borehole and the surrounding formation during the experiment are ion exchange, biodegradation and dissolution/precipitation reactions involving pyrite and carbonate minerals. In contrast, thermodynamic mineral dissolution/precipitation reactions involving alumino-silicate minerals have little impact on the fluid composition on the time scale of the experiment. With an accurate description of the initial chemical conditions in the formation, combined with kinetic formulations describing the different stages of bacterial activity, it has been possible to reproduce the evolution of important system parameters, such as the pH, redox potential, total organic C, dissolved inorganic C and SO4 concentration. Leaching of glycerol from the pH electrode may be the primary source of organic material that initiated bacterial growth, which caused the chemical perturbation in the borehole. Results from these simulations are consistent with data from the overcoring and demonstrate that the Opalinus Clay has a high buffering capacity in terms of chemical perturbations caused by bacterial activity. This buffering capacity can be attributed to the carbonate system as well as to the reactivity of clay surfaces.
Abstract:
Independent component analysis (ICA) and seed-based approaches (SBA) applied to functional magnetic resonance imaging blood oxygenation level dependent (BOLD) data have become widely used tools for identifying functionally connected, large-scale brain networks. Differences between task conditions as well as specific alterations of the networks in patients compared with healthy controls have been reported. However, BOLD does not allow quantification of absolute network metabolic activity, which is of particular interest in the case of pathological alterations. In contrast, arterial spin labeling (ASL) techniques allow absolute cerebral blood flow (CBF) to be quantified at rest and under task-related conditions. In this study, we explored the ability to identify networks in ASL data using ICA and to quantify network activity in terms of absolute CBF values. Moreover, we compared the results to SBA and performed a test-retest analysis. Twelve healthy young subjects performed a finger-tapping block-design experiment. During the task, pseudo-continuous ASL was acquired. After CBF quantification, the individual datasets were concatenated and subjected to the ICA algorithm. ICA proved capable of identifying the somato-motor and the default mode network. Moreover, absolute network CBF within the separate networks during either condition could be quantified. We demonstrated that functional connectivity analysis using ICA and SBA is feasible and robust in ASL-CBF data. CBF functional connectivity is a novel approach that opens a new strategy for evaluating differences in network activity in terms of absolute network CBF, and thus allows inter-individual differences in the resting state and in task-related activations and deactivations to be quantified.
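The group analysis described above (CBF quantification, temporal concatenation across subjects, then ICA) can be illustrated with a short sketch. The snippet below is a minimal stand-in using scikit-learn's FastICA on synthetic data; the data shapes, component count and network read-out are illustrative assumptions, not the study's actual pipeline.

    # Minimal sketch of group ICA on quantified CBF maps, assuming the per-subject
    # CBF time series have already been computed and masked to a common voxel grid.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_subjects, n_volumes, n_voxels = 12, 60, 5000

    # Stand-in for quantified CBF volumes (ml/100g/min); in practice these would be
    # loaded from the ASL pipeline, one (n_volumes x n_voxels) matrix per subject.
    cbf = [rng.normal(50, 10, size=(n_volumes, n_voxels)) for _ in range(n_subjects)]

    # Temporal concatenation across subjects, as described for the group analysis.
    group_data = np.vstack(cbf)                      # (n_subjects*n_volumes, n_voxels)
    group_data -= group_data.mean(axis=0)            # center each voxel

    ica = FastICA(n_components=20, max_iter=1000, random_state=0)
    timecourses = ica.fit_transform(group_data)      # component time courses
    spatial_maps = ica.components_                   # one spatial map per component

    # Mean absolute CBF within a component's network can then be read out by
    # thresholding its spatial map and averaging the original CBF values there.
    mask = np.abs(spatial_maps[0]) > np.percentile(np.abs(spatial_maps[0]), 95)
    network_cbf = np.vstack(cbf)[:, mask].mean()
    print(f"mean CBF in network 0: {network_cbf:.1f} ml/100g/min")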
Abstract:
In this paper, we focus on a model for two types of tumors. Tumor development can be described by four types of death rates and four tumor transition rates. We present a general semi-parametric model to estimate the tumor transition rates based on data from survival/sacrifice experiments. In the model, we assume that the tumor transition rates are proportional to a common parametric function, but make no assumption about the death rates from any state. We derive the likelihood function of the data observed in such an experiment and an EM algorithm that simplifies the estimation procedure. This article extends work on semi-parametric models for one type of tumor (see Portier and Dinse, and Dinse) to two types of tumors.
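One way to write the proportional assumption described above is the following; the notation is illustrative rather than the authors' own. A common parametric baseline function is shared by all tumor transitions, each scaled by its own proportionality parameter, while the death rates from the different states are left unspecified:

    \[
      \lambda_j(t) \;=\; \theta_j \, \lambda_0(t;\beta), \qquad j = 1,\dots,4,
    \]

where \(\lambda_0(t;\beta)\) is the common parametric function, the \(\theta_j\) are the transition-specific proportionality parameters, and both \(\theta_j\) and \(\beta\) would be estimated from the survival/sacrifice data (here via the EM algorithm), with no parametric form imposed on the death rates.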
Abstract:
This article gives an overview of the methods used in the low-level analysis of gene expression data generated using DNA microarrays. This type of experiment makes it possible to determine relative levels of nucleic acid abundance in a set of tissues or cell populations for thousands of transcripts or loci simultaneously. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. This includes the design of probes, the experimental design, the image analysis of scanned microarray images, the normalization of fluorescence intensities, the assessment of the quality of microarray data and the incorporation of quality information in subsequent analyses, the combination of information across arrays and across sets of experiments, the discovery and recognition of patterns in expression at the single-gene and multiple-gene levels, and the assessment of the significance of these findings in the presence of substantial noise and hence random features in the data. For all of these components, access to a flexible and efficient statistical computing environment is essential.
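As a concrete illustration of one of the steps listed above, the sketch below shows quantile normalization of fluorescence intensities across arrays, one standard low-level normalization scheme (chosen here as an example; the article surveys several approaches). The data are synthetic and the probe and array counts are placeholders.

    # Sketch of quantile normalization across arrays: give every array the same
    # empirical intensity distribution before downstream analysis. Real pipelines
    # operate on log-scale probe intensities after image analysis and background
    # correction.
    import numpy as np

    rng = np.random.default_rng(1)
    n_probes, n_arrays = 10000, 6
    intensities = rng.lognormal(mean=7.0, sigma=1.0, size=(n_probes, n_arrays))
    log_int = np.log2(intensities)

    order = np.argsort(log_int, axis=0)                 # rank of each probe per array
    sorted_vals = np.sort(log_int, axis=0)
    reference = sorted_vals.mean(axis=1)                # common target distribution
    normalized = np.empty_like(log_int)
    for j in range(n_arrays):
        normalized[order[:, j], j] = reference          # map ranks back to probes

    # After normalization all arrays share identical quantiles.
    print(np.allclose(np.sort(normalized, axis=0),
                      np.tile(reference[:, None], (1, n_arrays))))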
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present here our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
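The outlier handling rests on least trimmed squares, which repeatedly refits a model on the fraction of points with the smallest residuals. The sketch below shows that principle on a toy line fit with a roughly estimated outlier rate; it is not the paper's three-stage registration, instantiation and deformation pipeline, and all values are illustrative.

    # Minimal sketch of the least trimmed squares (LTS) idea used to handle
    # outliers: repeatedly fit on the subset of points with the smallest residuals.
    import numpy as np

    rng = np.random.default_rng(2)
    n, outlier_rate = 200, 0.2
    x = rng.uniform(0, 10, n)
    y = 3.0 * x + 1.0 + rng.normal(0, 0.3, n)
    outliers = rng.random(n) < outlier_rate
    y[outliers] += rng.uniform(10, 30, outliers.sum())       # gross outliers

    h = int(np.ceil((1.0 - outlier_rate) * n))                # points kept per fit
    A = np.column_stack([x, np.ones(n)])
    keep = np.arange(n)                                       # start with all points
    for _ in range(20):
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        residuals = np.abs(A @ coef - y)
        new_keep = np.argsort(residuals)[:h]                  # h smallest residuals
        if set(new_keep) == set(keep):
            break
        keep = new_keep

    print("slope, intercept:", coef)                          # close to (3.0, 1.0)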
Abstract:
The physics program of the NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) experiment at the CERN SPS consists of three subjects. In the first stage of data taking (2007-2009), measurements of hadron production in hadron-nucleus interactions needed for neutrino (T2K) and cosmic-ray (Pierre Auger and KASCADE) experiments will be performed. In the second stage (2009-2010), hadron production in proton-proton and proton-nucleus interactions needed as reference data for a better understanding of nucleus-nucleus reactions will be studied. In the third stage (2009-2013), the energy dependence of hadron production properties will be measured in p+p and p+Pb interactions and in nucleus-nucleus collisions, with the aim of identifying the properties of the onset of deconfinement and finding evidence for the critical point of strongly interacting matter. The NA61 experiment was approved at CERN in June 2007. The first pilot run was performed during October 2007. Calibrations of all detector components have been performed successfully and preliminary uncorrected spectra have been obtained. High quality of track reconstruction and particle identification similar to NA49 has been achieved. The data and new detailed simulations confirm that the NA61 detector acceptance and particle identification capabilities cover the phase space required by the T2K experiment. This document reports on the progress made in the calibration and analysis of the 2007 data.
Abstract:
This tutorial gives a step-by-step explanation of how one uses experimental data to construct a biologically realistic multicompartmental model. Special emphasis is given to the many ways in which this process can be imprecise. The tutorial is intended both for experimentalists who want to get into computer modeling and for computer scientists who use abstract neural network models but are curious about biologically realistic modeling. The tutorial does not depend on the use of a specific simulation engine, but rather covers the kinds of data needed to construct a model, how they are used, and potential pitfalls in the process.
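As a minimal example of the kind of model such a tutorial builds up to, the sketch below integrates two passive, electrically coupled compartments (a soma and a dendrite) with forward Euler. All parameter values are illustrative placeholders rather than measured data, and no specific simulation engine is assumed.

    # Toy two-compartment passive model driven by a somatic current step.
    import numpy as np

    C = np.array([100e-12, 50e-12])       # membrane capacitance, F
    g_leak = np.array([5e-9, 2.5e-9])     # leak conductance, S
    E_leak = -65e-3                       # leak reversal potential, V
    g_axial = 10e-9                       # axial coupling conductance, S

    dt, t_end = 25e-6, 0.2                # s
    steps = int(t_end / dt)
    V = np.full(2, E_leak)
    trace = np.empty((steps, 2))

    for i in range(steps):
        t = i * dt
        I_inj = np.array([100e-12 if 0.05 < t < 0.15 else 0.0, 0.0])  # soma step
        I_leak = g_leak * (E_leak - V)
        I_coupling = g_axial * (V[::-1] - V)     # current from the other compartment
        V = V + dt * (I_leak + I_coupling + I_inj) / C
        trace[i] = V

    print("peak somatic depolarization (mV):", 1e3 * (trace[:, 0].max() - E_leak))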
Abstract:
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
Abstract:
Determinants of plant establishment and invasion are a key issue in ecology and evolution. Although establishment success varies substantially among species, the importance of species traits and extrinsic factors as determinants of establishment in existing communities has remained difficult to prove in observational studies because they can be confounded and mask each other. Therefore, we conducted a large multispecies field experiment to disentangle the relative importance of extrinsic factors vs. species characteristics for the establishment success of plants in grasslands. We introduced 48 alien and 45 native plant species at different seed numbers into multiple grassland sites with or without experimental soil disturbance and related their establishment success to species traits assessed in five independent multispecies greenhouse experiments. High propagule pressure and high seed mass were the most important factors increasing establishment success in the very beginning of the experiment. However, after 3 y, propagule pressure became less important, and species traits related to biotic interactions (including herbivore resistance and responses to shading and competition) became the most important drivers of success or failure. The relative importance of different traits was environment-dependent and changed over time. Our approach of combining a multispecies introduction experiment in the field with trait data from independent multispecies experiments in the greenhouse allowed us to detect the relative importance of species traits for early establishment and provided evidence that species traits—fine-tuned by environmental factors—determine success or failure of alien and native plants in temperate grasslands.
Abstract:
We present a program (Ragu; Randomization Graphical User interface) for statistical analyses of multichannel event-related EEG and MEG experiments. Based on measures of scalp field differences that include all sensors, and using powerful, assumption-free randomization statistics, the program yields robust, physiologically meaningful conclusions based on the entire, untransformed, and unbiased set of measurements. Ragu accommodates up to two within-subject factors and one between-subject factor, each with multiple levels. Significance is computed as a function of time and can be controlled for type II errors with overall analyses. Results are displayed in an intuitive visual interface that allows further exploration of the findings. A sample analysis of an ERP experiment illustrates the different possibilities offered by Ragu. The aim of Ragu is to maximize statistical power while minimizing the need for a-priori choices of models and parameters (such as inverse models or sensors of interest) that interact with and bias statistics.
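The core of such randomization statistics can be sketched in a few lines: compute a statistic over all sensors (here, the global field power of the grand-mean condition-difference map at a single latency) and compare it with a null distribution obtained by randomly exchanging condition labels within subjects. This is a simplified stand-in for the idea, not the Ragu implementation, and the data are synthetic.

    # Permutation test on multichannel maps for two within-subject conditions.
    import numpy as np

    rng = np.random.default_rng(3)
    n_subjects, n_channels = 20, 64

    # Synthetic average-referenced maps per subject and condition at one time point.
    cond_a = rng.normal(0, 1, (n_subjects, n_channels))
    cond_b = cond_a + rng.normal(0.3, 1, (n_subjects, n_channels))   # small effect
    cond_a -= cond_a.mean(axis=1, keepdims=True)
    cond_b -= cond_b.mean(axis=1, keepdims=True)

    def effect_size(a, b):
        diff = (b - a).mean(axis=0)           # grand-mean difference map
        return np.sqrt((diff ** 2).mean())    # its global field power

    observed = effect_size(cond_a, cond_b)
    n_perm, count = 5000, 0
    for _ in range(n_perm):
        flip = rng.random(n_subjects) < 0.5   # swap conditions within random subjects
        a = np.where(flip[:, None], cond_b, cond_a)
        b = np.where(flip[:, None], cond_a, cond_b)
        if effect_size(a, b) >= observed:
            count += 1

    print("p =", (count + 1) / (n_perm + 1))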
Abstract:
Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought", which represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as onset, offset or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand-mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. The problem with this approach is that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set from an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to existing methods. In a first comparison with previously employed statistical procedures, the new method showed increased robustness to noise and higher sensitivity for more subtle effects of microstate timing. We conclude that the proposed method is well suited for the assessment of timing differences in cognitive processes. The increased statistical power allows more subtle effects to be identified, which is particularly important in small and scarce patient populations.
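The cross-validation criterion for choosing the number of prototypes can be illustrated as follows: cluster the maps of all but one subject, score how well the resulting prototypes explain the left-out subject's maps, and repeat over subjects and candidate numbers of prototypes. The sketch below uses plain k-means as a stand-in for polarity-invariant microstate clustering, so it only illustrates the selection logic under that simplifying assumption.

    # Leave-one-subject-out selection of the number of map prototypes.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n_subjects, n_times, n_channels = 10, 200, 30
    data = rng.normal(size=(n_subjects, n_times, n_channels))
    data -= data.mean(axis=2, keepdims=True)              # average reference

    def explained_variance(maps, prototypes):
        # assign each map to its best-matching prototype (polarity-tolerant)
        sim = np.abs(maps @ prototypes.T)
        best = sim.max(axis=1)
        return (best ** 2).sum() / (maps ** 2).sum()

    scores = {}
    for k in (3, 4, 5, 6, 7):
        cv = []
        for left_out in range(n_subjects):
            train = np.vstack([data[s] for s in range(n_subjects) if s != left_out])
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train)
            protos = km.cluster_centers_
            protos /= np.linalg.norm(protos, axis=1, keepdims=True)
            cv.append(explained_variance(data[left_out], protos))
        scores[k] = np.mean(cv)

    print(scores)   # pick the k with the best held-out score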
Abstract:
1. Biodiversity-ecosystem functioning (BEF) experiments address ecosystem-level consequences of species loss by comparing communities of high species richness with communities from which species have been gradually eliminated. BEF experiments originally started with microcosms in the laboratory and with grassland ecosystems. A new frontier in experimental BEF research is manipulating tree diversity in forest ecosystems, compelling researchers to think big and comprehensively. 2. We present and discuss some of the major issues to be considered in the design of BEF experiments with trees and illustrate these with a new forest biodiversity experiment established in subtropical China (Xingangshan, Jiangxi Province) in 2009/2010. Using a pool of 40 tree species, extinction scenarios were simulated with tree richness levels of 1, 2, 4, 8 and 16 species on a total of 566 plots of 25.8 x 25.8 m each. 3. The goal of this experiment is to estimate effects of tree and shrub species richness on carbon storage and soil erosion; therefore, the experiment was established on sloped terrain. The following important design choices were made: (i) establishing many small rather than fewer larger plots, (ii) using high planting density and random mixing of species rather than lower planting density and patchwise mixing of species, (iii) establishing a map of the initial 'ecoscape' to characterize site heterogeneity before the onset of biodiversity effects and (iv) manipulating tree species richness not only in random but also in trait-oriented extinction scenarios. 4. Data management and analysis are particularly challenging in BEF experiments with their hierarchical designs nesting individuals within species populations, within plots, within species compositions. Statistical analysis best proceeds by partitioning these random terms into fixed-term contrasts, for example, species composition into contrasts for species richness and the presence of particular functional groups, which can then be tested against the remaining random variation among compositions. 5. We conclude that forest BEF experiments provide exciting and timely research options. They especially require careful thinking to allow multiple disciplines to measure and analyse data jointly and effectively. Achieving specific research goals and synergy with previous experiments involves trade-offs between different designs and requires manifold design decisions.
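Point 4 above, testing fixed contrasts such as species richness against the remaining random variation among species compositions, can be sketched with a mixed model. The snippet below uses statsmodels with synthetic data; the column names, effect sizes and the single richness contrast are placeholders for illustration only.

    # Sketch: richness as a fixed contrast, composition as a random grouping term.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_compositions, plots_per_comp = 40, 4
    rows = []
    for comp in range(n_compositions):
        richness = rng.choice([1, 2, 4, 8, 16])
        comp_effect = rng.normal(0, 0.5)          # random variation among compositions
        for _ in range(plots_per_comp):
            rows.append({
                "composition": f"comp{comp}",
                "log2_richness": np.log2(richness),
                "response": 1.0 + 0.3 * np.log2(richness) + comp_effect
                            + rng.normal(0, 0.3),
            })
    df = pd.DataFrame(rows)

    # Random intercept for species composition; the richness contrast is tested
    # against the among-composition variation.
    model = smf.mixedlm("response ~ log2_richness", df, groups=df["composition"])
    print(model.fit().summary())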
Abstract:
The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noise (1/f spectral noise) combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand-picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5th to 95th percentile range in velocity bias for automated approaches is equal to 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is equal to 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automatic solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2-0.4 mm/yr is therefore certainly not robust, and a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
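The additive model used for the simulated series (velocity trend, step offsets, white noise and flicker noise) can be sketched as below. The flicker component is generated by spectral shaping so that its power spectral density falls off as 1/f; all amplitudes, rates and offset epochs are illustrative values, not those of the actual blind-test data set.

    # Synthetic daily position series: trend + offsets + white + flicker noise.
    import numpy as np

    rng = np.random.default_rng(6)
    n_days = 365 * 15
    t_years = np.arange(n_days) / 365.25

    velocity = 3.0                                   # mm/yr
    trend = velocity * t_years

    # Step offsets at a few random epochs.
    offsets = np.zeros(n_days)
    for epoch in rng.choice(n_days, size=4, replace=False):
        offsets[epoch:] += rng.normal(0, 8.0)        # mm

    white = rng.normal(0, 1.0, n_days)               # mm

    # Flicker noise via spectral shaping: scale a white spectrum by f^(-1/2)
    # so the power spectral density falls off as 1/f.
    spec = np.fft.rfft(rng.normal(0, 1.0, n_days))
    freqs = np.fft.rfftfreq(n_days, d=1.0)
    shape = np.zeros_like(freqs)
    shape[1:] = freqs[1:] ** -0.5
    flicker = np.fft.irfft(spec * shape, n=n_days)
    flicker *= 2.0 / flicker.std()                   # mm, illustrative amplitude

    series = trend + offsets + white + flicker       # daily position, mm
    print(series[:5])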
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and to investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and Na-22(+) diffusion and sorption parameters are studied here by means of synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone, and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows identification of the correct hypothesis about the borehole disturbed zone, and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and in the effective diffusion coefficient of the filter. Proper interpretation of the experiment requires the correct hypothesis about the borehole disturbed zone; a wrong assumption leads to large estimation errors. The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors; therefore, attention should be paid to minimizing errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
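A synthetic experiment of this kind needs a forward model that produces both data types from the same parameters. The sketch below uses a deliberately simple set-up, a well-mixed borehole losing tracer by one-dimensional diffusion into rock with linear sorption, solved with explicit finite differences, to generate a dilution curve and an overcoring profile; the geometry and parameter values are placeholders, and the real analysis relies on inverse numerical modeling of the actual DR configuration.

    # Toy synthetic experiment: dilution curve plus end-of-test rock profile.
    import numpy as np

    def synthetic_experiment(De, Kd, porosity=0.15, rho_bulk=2400.0,
                             h=0.05, depth=0.3, nx=60, duration=3.15e7):
        """De in m^2/s, Kd in m^3/kg, h = borehole water volume per unit
        interface area (m), duration in s (about one year)."""
        dx = depth / nx
        alpha = porosity + rho_bulk * Kd          # rock capacity factor
        dt = 0.4 * dx ** 2 * alpha / De           # explicit stability limit
        steps = int(duration / dt)

        c_bh = 1.0                                # normalized borehole concentration
        c = np.zeros(nx)                          # porewater concentration in rock
        dilution = []
        for _ in range(steps):
            flux_in = De * (c_bh - c[0]) / (dx / 2)        # borehole -> first cell
            flux = De * (c[:-1] - c[1:]) / dx              # cell -> cell fluxes
            dc = np.zeros(nx)
            dc[0] = (flux_in - flux[0]) / (alpha * dx)
            dc[1:-1] = (flux[:-1] - flux[1:]) / (alpha * dx)
            dc[-1] = flux[-1] / (alpha * dx)               # zero-flux outer boundary
            c += dt * dc
            c_bh += dt * (-flux_in / h)
            dilution.append(c_bh)
        x = (np.arange(nx) + 0.5) * dx
        return np.array(dilution), x, c

    # "Measured" data: forward run plus measurement noise, as in synthetic tests.
    dilution, x, profile = synthetic_experiment(De=1e-11, Kd=0.0)
    noisy_dilution = dilution + np.random.default_rng(7).normal(0, 0.01, dilution.size)
    print("final borehole concentration:", dilution[-1])
    print("penetration depth (c > 1% of start):", x[profile > 0.01].max(), "m")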
Abstract:
The migration of radioactive and chemical contaminants in clay materials and argillaceous host rocks is characterised by diffusion and retention processes. Valuable information on such processes can be gained by combining diffusion studies at the laboratory scale with field migration tests. In this work, the outcome of a multi-tracer in situ migration test performed in the Opalinus Clay formation in the Mont Terri underground rock laboratory (Switzerland) is presented. Thus, 1.16 x 10^5 Bq/L of HTO, 3.96 x 10^3 Bq/L of Sr-85, 6.29 x 10^2 Bq/L of Co-60, 2.01 x 10^-3 mol/L Cs, 9.10 x 10^-4 mol/L I and 1.04 x 10^-3 mol/L Br were injected into the borehole. The decrease of the radioisotope concentrations in the borehole was monitored using in situ gamma-spectrometry. The other tracers were analyzed with state-of-the-art laboratory procedures after sampling of small water aliquots from the reservoir. The diffusion experiment was carried out over a period of one year, after which the interval section was overcored and analyzed. Based on the experimental data from the tracer evolution in the borehole and the tracer profiles in the rock, the diffusion of tracers was modelled with the numerical code CRUNCH. The results obtained for HTO (H-3), I- and Br- confirm previous laboratory and in situ diffusion data. Anionic fluxes into the formation were smaller compared to HTO because of anion exclusion effects. The migration of the cations Sr-85(2+), Cs+ and Co-60(2+) was found to be governed by both diffusion and sorption processes. For Sr-85(2+), the slightly higher diffusivity relative to HTO and the low sorption value are consistent with laboratory diffusion measurements on small-scale samples. In the case of Cs+, the numerically deduced high diffusivity and the Freundlich-type sorption behaviour are also supported by ongoing laboratory data. For Co, no laboratory diffusion data were yet available for comparison; however, the modelled data suggest that Co-60(2+) sorption was weaker than would be expected from available batch sorption data. Overall, the results demonstrate the feasibility of the experimental setup for obtaining high-quality diffusion data for conservative and sorbing tracers.
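The contrast between the conservative and the sorbing tracers can be illustrated with a back-of-the-envelope calculation: the apparent diffusivity is reduced by the rock capacity factor (porosity plus bulk density times Kd), and the diffusion length over the one-year test scales accordingly. The numbers below are rough placeholder values, not the parameters fitted in this study.

    # Rough diffusion lengths over one year for tracers with different sorption.
    import math

    t = 3.15e7            # s, about one year
    porosity = 0.15
    rho_bulk = 2400.0     # kg/m^3

    tracers = {
        # name: (effective diffusion coefficient m^2/s, Kd m^3/kg), illustrative only
        "conservative (HTO-like)": (1e-11, 0.0),
        "weakly sorbing cation": (1e-11, 1e-4),
        "strongly sorbing cation": (1e-11, 1e-2),
    }
    for name, (De, Kd) in tracers.items():
        Da = De / (porosity + rho_bulk * Kd)          # apparent diffusivity
        length_cm = 100 * math.sqrt(2 * Da * t)       # characteristic penetration
        print(f"{name:25s} Da = {Da:.2e} m^2/s, diffusion length ~ {length_cm:.1f} cm")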