41 results for Which-way experiments
Abstract:
1. Biodiversity-ecosystem functioning (BEF) experiments address ecosystem-level consequences of species loss by comparing communities of high species richness with communities from which species have been gradually eliminated. BEF experiments originally started with microcosms in the laboratory and with grassland ecosystems. A new frontier in experimental BEF research is manipulating tree diversity in forest ecosystems, compelling researchers to think big and comprehensively. 2. We present and discuss some of the major issues to be considered in the design of BEF experiments with trees and illustrate these with a new forest biodiversity experiment established in subtropical China (Xingangshan, Jiangxi Province) in 2009/2010. Using a pool of 40 tree species, extinction scenarios were simulated with tree richness levels of 1, 2, 4, 8 and 16 species on a total of 566 plots of 25.8 × 25.8 m each. 3. The goal of this experiment is to estimate effects of tree and shrub species richness on carbon storage and soil erosion; therefore, the experiment was established on sloped terrain. The following important design choices were made: (i) establishing many small rather than fewer larger plots, (ii) using high planting density and random mixing of species rather than lower planting density and patchwise mixing of species, (iii) establishing a map of the initial 'ecoscape' to characterize site heterogeneity before the onset of biodiversity effects and (iv) manipulating tree species richness not only in random but also in trait-oriented extinction scenarios. 4. Data management and analysis are particularly challenging in BEF experiments with their hierarchical designs, which nest individuals within species populations, species populations within plots, and plots within species compositions. Statistical analysis best proceeds by partitioning these random terms into fixed-term contrasts, for example, species composition into contrasts for species richness and the presence of particular functional groups, which can then be tested against the remaining random variation among compositions. 5. We conclude that forest BEF experiments provide exciting and timely research options. They especially require careful thinking to allow multiple disciplines to measure and analyse data jointly and effectively. Achieving specific research goals and synergy with previous experiments involves trade-offs between different designs and requires manifold design decisions.
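To make point 4 concrete, here is a minimal sketch (ours, not the authors' analysis code; all column names and numbers are invented) of testing a species-richness contrast against the residual random variation among species compositions, using a mixed model in Python with statsmodels:

```python
# Hypothetical sketch: richness enters as a fixed contrast, while the
# identity of the species composition is the random term it is tested
# against. Data are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_comp, plots_per_comp = 40, 2
df = pd.DataFrame({
    "composition": np.repeat(np.arange(n_comp), plots_per_comp),
    "richness": np.repeat(rng.choice([1, 2, 4, 8, 16], n_comp),
                          plots_per_comp),
})
# Simulated response: carbon storage rising with log richness, plus
# composition-level and plot-level noise.
comp_effect = rng.normal(0, 1, n_comp)
df["carbon"] = (2.0 * np.log2(df["richness"])
                + comp_effect[df["composition"]]
                + rng.normal(0, 0.5, len(df)))

# Fixed effect: log2(richness); random term: composition identity.
model = smf.mixedlm("carbon ~ np.log2(richness)", df,
                    groups=df["composition"])
print(model.fit().summary())
```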
Abstract:
Prospective memory involves the self-initiated retrieval of an intention upon an appropriate retrieval cue. Cue identification can be considered an orienting reaction and may thus trigger a psychophysiological response. Here we present two experiments in which skin conductance responses (SCRs) elicited by prospective memory cues were compared to SCRs elicited by aversive stimuli, to test whether a single prospective memory cue triggers an SCR similar to that of an aversive stimulus. In Experiment 2 we also assessed whether cue specificity had a differential influence on prospective memory performance and on SCRs. We found that detecting a single prospective memory cue is as likely to elicit an SCR as an aversive stimulus. Missed prospective memory cues also elicited SCRs. On a behavioural level, specific intentions led to better prospective memory performance; on a psychophysiological level, however, specificity had no influence. More generally, the results indicate reliable SCRs for prospective memory cues and point to psychophysiological measures as a valuable approach that offers a new way to study one-off prospective memory tasks. Moreover, the findings are consistent with a theory that posits multiple prospective memory retrieval stages.
Abstract:
The occurrence of gaseous pollutants in soils has stimulated many experimental activities, including forced ventilation in the field as well as laboratory transport experiments with gases. The dispersion coefficient in advective-dispersive gas-phase transport is often dominated by molecular diffusion, which leads to a large overall dispersivity γ. Under such conditions it is important to distinguish between flux and resident modes of solute injection and detection. The influence of the inlet type on the macroscopic injection mode was tested in two series of column experiments with gases at different mean flow velocities ν. First we compared infinite resident and flux injections, and second, semi-infinite resident and flux injections. It is shown that the macroscopically apparent injection condition depends on the geometry of the inlet section. A reduction of the cross-sectional area of the inlet relative to that of the column is very effective in excluding the diffusive solute input, thus allowing us to use the solutions for a flux injection even at rather low mean flow velocities ν. If the whole cross section of a column is exposed to a large reservoir such as ambient air, a semi-infinite resident injection is established, which can be distinguished from a flux injection even at relatively high velocities ν, depending on the mechanical dispersivity of the porous medium.
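As an illustration of the flux/resident distinction discussed here (our sketch, not the study's code), the standard analytical solutions of the one-dimensional advection-dispersion equation with a first-type (resident-concentration) and a third-type (flux) inlet condition can be compared directly; all parameter values below are invented:

```python
# Ogata-Banks solution (first-type BC) vs. the van Genuchten-type
# solution for a third-type (flux) BC, both for a semi-infinite column.
import numpy as np
from scipy.special import erfc

def resident_injection(x, t, v, D):
    """First-type inlet BC: C(0, t) = C0 (resident injection)."""
    return 0.5 * erfc((x - v * t) / (2 * np.sqrt(D * t))) \
         + 0.5 * np.exp(v * x / D) * erfc((x + v * t) / (2 * np.sqrt(D * t)))

def flux_injection(x, t, v, D):
    """Third-type inlet BC: v*C - D*dC/dx = v*C0 at x = 0 (flux injection)."""
    a = (x - v * t) / (2 * np.sqrt(D * t))
    b = (x + v * t) / (2 * np.sqrt(D * t))
    return (0.5 * erfc(a)
            + np.sqrt(v**2 * t / (np.pi * D)) * np.exp(-a**2)
            - 0.5 * (1 + v * x / D + v**2 * t / D)
                  * np.exp(v * x / D) * erfc(b))

# At low velocity, diffusion dominates D and the two modes diverge strongly.
x, D = 0.1, 1e-5            # observation point (m), gas dispersion coeff. (m2/s)
for v in (1e-4, 1e-3):      # mean flow velocities (m/s)
    t = np.linspace(1, 4000, 5)
    print(f"v = {v}:",
          np.round(flux_injection(x, t, v, D)
                   - resident_injection(x, t, v, D), 3))
```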
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and to investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured in host-rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and ²²Na⁺ diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual positions of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone, and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows identification of the right hypothesis about the borehole disturbed zone, and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and in the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone; a wrong assumption leads to large estimation errors, and the use of model identification criteria helps in selecting the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors, so attention should be paid to minimizing errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
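A minimal synthetic-experiment sketch of the identifiability point (ours, not the DR project's model; geometry is simplified to linear diffusion with no borehole disturbed zone, and all numbers are invented): for a well-mixed injection interval of volume V and contact area A losing tracer by one-dimensional diffusion into the rock, the interval concentration follows C(t)/C0 = exp(k²t)·erfc(k√t) with k = (A/V)·√(De·α), so dilution data alone constrain only the product of the effective diffusion coefficient De and the capacity factor α, which is one reason to add overcoring data:

```python
import numpy as np
from scipy.special import erfcx          # erfcx(z) = exp(z**2) * erfc(z)
from scipy.optimize import curve_fit

A_over_V = 20.0                          # contact area / interval volume (1/m)

def dilution(t, De_alpha):
    k = A_over_V * np.sqrt(De_alpha)
    return erfcx(k * np.sqrt(t))         # C(t)/C0 in the injection interval

# Synthetic "measured" dilution data with a realistic relative error.
rng = np.random.default_rng(0)
t = np.linspace(0, 300 * 86400, 40)                  # 300 days in seconds
true = dilution(t, 1e-11 * 0.15)                     # De=1e-11 m2/s, alpha=0.15
data = true * (1 + rng.normal(0, 0.02, t.size))      # 2 % measurement noise

est, cov = curve_fit(dilution, t, data, p0=[1e-12])
print(f"true De*alpha = {1e-11 * 0.15:.2e}, "
      f"estimated = {est[0]:.2e} +/- {np.sqrt(cov[0, 0]):.2e}")
```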
Abstract:
The diffusion properties of the Opalinus Clay were studied in the underground research laboratory at Mont Terri (Canton Jura, Switzerland), and the results were compared with diffusion data measured in the laboratory on small-scale samples. The diffusion of HTO, ²²Na⁺, Cs⁺ and I⁻ was investigated over a period of 10 months. The diffusion equipment used in the field experiment was designed in such a way that a solution of tracers was circulated through a sintered metal screen placed at the end of a borehole drilled in the formation. The concentration decrease caused by the diffusion of tracers into the rock could be followed over time and allowed first estimates of the effective diffusion coefficient. After 10 months, the diffusion zone was overcored and the tracer profiles were measured. From these profiles, effective diffusion coefficients and rock capacity factors could be extracted by applying a two-dimensional transport model including diffusion and sorption. The simulations were done with the reactive transport code CRUNCH. In addition, results obtained from through-diffusion experiments on small-sized samples with HTO, ³⁶Cl⁻ and ²²Na⁺ are presented and compared with the in situ data. In all cases, excellent agreement between the two data sets exists. Results for Cs⁺ indicated five times higher diffusion rates relative to HTO. Corresponding laboratory diffusion measurements are still lacking; however, our Cs⁺ data are in qualitative agreement with through-diffusion data for Callovo-Oxfordian argillite rock samples, which also indicate significantly higher effective diffusivities for Cs⁺ relative to HTO.
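A complementary sketch (again ours and deliberately simplified to one dimension; numbers invented): with a constant tracer concentration maintained at the borehole wall, the overcored pore-water profile follows an erfc shape whose width constrains the apparent diffusivity Da = De/α. Combined with the product De·α obtainable from dilution data, De and the rock capacity factor α can then be separated:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

t = 10 * 30 * 86400                       # ~10 months in seconds

def profile(x, Da):
    """Pore-water concentration c(x, t)/C0 = erfc(x / (2*sqrt(Da*t)))."""
    return erfc(x / (2.0 * np.sqrt(Da * t)))

# Synthetic overcoring profile with additive measurement noise.
rng = np.random.default_rng(3)
x = np.linspace(0.002, 0.10, 25)          # sample depths into the rock (m)
true_Da = 6.7e-11                         # e.g. De = 1e-11 m2/s, alpha = 0.15
data = profile(x, true_Da) + rng.normal(0, 0.01, x.size)

est, cov = curve_fit(profile, x, data, p0=[1e-10])
print(f"apparent diffusivity Da: true {true_Da:.2e}, "
      f"fitted {est[0]:.2e} m^2/s")
```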
Abstract:
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channels beyond that already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
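An illustrative Monte Carlo simulation of the simple kind the paper analyses (the example is ours, not the authors'): estimating π by uniform sampling. The "result" is licensed purely by inference from assumptions built into the simulation, namely the uniformity of the sampler and the law of large numbers tying the hit ratio to the area ratio π/4:

```python
import random

def estimate_pi(n: int, seed: int = 42) -> float:
    """Draw n points in the unit square; count those inside the quarter circle."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

for n in (1_000, 100_000, 10_000_000):
    print(f"n = {n:>10,}: pi ~ {estimate_pi(n):.4f}")
```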
Abstract:
The paper argues for a distinction between sensory- and conceptual-information storage in the human information-processing system. Conceptual information is characterized as meaningful and symbolic, while sensory information may exist in modality-bound form. Furthermore, it is assumed that sensory information does not contribute to conscious remembering and can be used only in data-driven process repetitions, which can be accompanied by a kind of vague or intuitive feeling. Accordingly, purely top-down and willingly controlled processing, such as free recall, should not have any access to sensory data. Empirical results from different research areas and from two experiments conducted by the authors are presented in this article to support these theoretical distinctions. The experiments were designed to separate a sensory-motor and a conceptual component in memory for two-digit numbers and two-letter items, when parts of the numbers or items were imaged or drawn on a tablet. The results of free recall and recognition are discussed in a theoretical framework which distinguishes sensory and conceptual information in memory.
Abstract:
On the Limits of Greenwich Mean Time, or The Failure of a Modernist Revolution. From the introduction of World Standard Time in 1884 to Einstein's theory of relativity, the nature and regulation of time was a highly contested issue in modernism, with profound political, social and epistemological consequences. Modernist aesthetic sensibilities widely revolted against the increasingly strict rule of the clock, which, as Georg Simmel observed in "The Metropolis and Mental Life," was established as the necessary basis of capitalist, urban life. This paper will focus on the contending conceptions of time arising in key modernist texts by authors such as Joyce, Woolf and Conrad. I will argue that the uniformity and regularity of time necessary to a rising capitalist society came under attack in a similar way from both modernist literary aesthetics and new scientific discoveries. However, while Einstein's theory of relativity may have led to a subsequent change of paradigm in scientific thought, it failed to significantly alter social and popular conceptions of time. Although alternative ways of thinking and living with time are proposed by modernist authors, they remain isolated aesthetic experiments, ineffectual against the regulatory pressure of economic and social structures. In this struggle over the nature of time, I suggest, science and literature joined forces against a society increasingly governed by economic reason. The fact that they lost this struggle can serve as a striking illustration of a growing shift of social influence away from science and art and towards the economy.
Abstract:
Ovine bone marrow-derived macrophages (BMM) may express several IgG receptor (Fc gamma receptor; FcR) subsets. To study this, model particles (opsonized erythrocytes; EA) that are selectively handled by certain FcR subsets of human macrophages were used in cross-inhibition studies and found to react in a similar manner with FcR subsets of sheep macrophages. In experiments with monoclonal antibodies against subsets of human FcR, human erythrocytes (E) treated with human anti-D-IgG (anti-D-EAhu) and sheep E treated with bovine IgG1 (Bo1-EAs) were handled selectively by human macrophage FcRI and FcRII, respectively. Rabbit-IgG-coated sheep E (Rb-EAs) were recognized by FcRI, FcRII and possibly also by FcRIII of human macrophages. Anti-D-EAhu, Bo1-EAs and Rb-EAs were also ingested by sheep BMM. Competitive inhibition tests, using various homologous and heterologous IgG isotypes as fluid-phase inhibitors and the particles used as FcR-specific tools in man (anti-D-EAhu and Bo1-EAs), revealed a heterogeneity of FcR also in sheep BMM. Thus, ingestion of anti-D-EAhu by ovine BMM was inhibited by low concentrations of competitor IgG from rabbit or man in the fluid phase, but not at all by bovine IgG1, whereas ingestion of Bo1-EAs was inhibited by bovine IgG1. This suggested that anti-D-EAhu were recognized by an FcR subset distinct from that recognizing bovine IgG1. It was concluded that sheep BMM express functional analogs of human macrophage FcRI and FcRII and that Bo1-EAs and anti-D-EAhu are handled by distinct subsets of BMM FcR. All EAhu tested (EAhu treated with anti-D, sheep IgG1 or sheep IgG2) were ingested to a lower degree than EAs. This inefficient phagocytosis could be enhanced by treatment of EAhu with antiglobulin from the rabbit, suggesting that it is caused by a low degree of activity of the opsonizing antibodies rather than by special properties of the erythrocytes themselves. Several lines of evidence suggested that both FcR subsets of ovine BMM recognize both ovine IgG1 and IgG2. In contrast, bovine IgG1 reacts with one FcR subset, and bovine IgG2 interacts inefficiently with all FcR of ovine BMM.
Abstract:
Over the past years, the DNA double helix has served in numerous studies as a scaffold for the controlled arrangement of functional molecules, including a wide range of different chromophores. Other nucleic acid structures, such as the DNA three-way junction, have been exploited for this purpose as well. Recently, the successful development of DNA-based light-harvesting antenna systems has been reported. Herein, we describe the use of the DNA three-way junction (3WJ) as a versatile scaffold for the modular construction of an artificial light-harvesting complex (LHC). The LHC is based on a modular design in which a phenanthrene antenna is located in one of the three stems and the acceptor is brought into proximity of the antenna through the annealing of the third strand. Phenanthrene excitation (320 nm) is followed by energy transfer to pyrene (resulting in exciplex emission), perylenediimide (a quencher) or a cyanine dye (cyanine fluorescence).
Abstract:
Stepwise uncertainty reduction (SUR) strategies aim at constructing a sequence of points for evaluating a function f in such a way that the residual uncertainty about a quantity of interest progressively decreases to zero. Using such strategies in the framework of Gaussian process modeling has been shown to be efficient for estimating the volume of excursion of f above a fixed threshold. However, SUR strategies remain cumbersome to use in practice because of their high computational complexity, and the fact that they deliver a single point at each iteration. In this article we introduce several multipoint sampling criteria, allowing the selection of batches of points at which f can be evaluated in parallel. Such criteria are of particular interest when f is costly to evaluate and several CPUs are simultaneously available. We also manage to drastically reduce the computational cost of these strategies through the use of closed form formulas. We illustrate their performances in various numerical experiments, including a nuclear safety test case. Basic notions about kriging, auxiliary problems, complexity calculations, R code, and data are available online as supplementary materials.
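A hedged sketch of the setting (ours, in Python rather than the article's R, and using a naive greedy rule rather than the article's multipoint SUR criteria): a Gaussian process surrogate for a costly f, the pointwise excursion probability above a threshold, and a batch of q points chosen where classification of the excursion is most uncertain:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x           # stand-in for a costly simulator
T = 0.8                                         # excursion threshold

X = np.linspace(0, 4, 6)[:, None]               # initial design
gp = GaussianProcessRegressor(kernel=RBF(0.7), alpha=1e-8).fit(X, f(X.ravel()))

grid = np.linspace(0, 4, 400)[:, None]
mu, sd = gp.predict(grid, return_std=True)
p_exc = norm.cdf((mu - T) / np.maximum(sd, 1e-9))  # P(f(x) > T) under the GP

# Estimated excursion volume: mean excursion probability times domain size.
print("estimated excursion volume:", 4.0 * p_exc.mean())

# Greedy batch of q = 3 points maximizing the misclassification
# uncertainty p*(1-p), spacing them out with a crude exclusion window.
q, picks, score = 3, [], p_exc * (1 - p_exc)
for _ in range(q):
    i = int(np.argmax(score))
    picks.append(float(grid[i, 0]))
    score[max(0, i - 20): i + 20] = -1.0
print("next batch to evaluate in parallel:", picks)
```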
Abstract:
BACKGROUND: Treatment planning of localised prostate cancer remains challenging. Besides conventional parameters, a wealth of prognostic biomarkers has been proposed, none of which, however, has successfully been implemented in a routine setting so far. The aim of our study was to systematically verify a set of published prognostic markers for prostate cancer. METHODS: Following an in-depth PubMed search, 28 markers were selected that have been proposed as multivariate prognostic markers for primary prostate cancer. Their prognostic validity was examined in a radical prostatectomy cohort of 238 patients with a median follow-up of 60 months, with biochemical progression as the endpoint of the analysis. Immunohistochemical evaluation was performed using previously published cut-off values, allowing for optimisation where necessary. Univariate and multivariate Cox regression were used to determine the prognostic value of the biomarkers included in this study. RESULTS: Despite the application of various cut-offs in the analysis, only four (14%) markers were verified as independently prognostic (AKT1, stromal AR, EZH2, and PSMA) for PSA relapse following radical prostatectomy. CONCLUSIONS: Apparently, many immunohistochemistry-based studies on prognostic markers are over-optimistic. Codes of best practice, such as the REMARK guidelines, may facilitate the performance of conclusive and transparent future studies.
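A minimal sketch of the verification approach described (invented data, not the study's cohort or markers): dichotomize a marker at a published cut-off and test it in univariate and multivariate Cox models for biochemical progression, e.g. with the lifelines package in Python:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 238
df = pd.DataFrame({
    "months": rng.exponential(60, n).round(1),        # follow-up time
    "relapse": rng.integers(0, 2, n),                 # biochemical progression
    "gleason": rng.integers(6, 10, n),                # conventional parameter
    "marker": rng.normal(5, 2, n),                    # candidate biomarker
})
df["marker_high"] = (df["marker"] > 5.0).astype(int)  # published cut-off

# Univariate model for the dichotomized marker alone.
uni = CoxPHFitter().fit(df[["months", "relapse", "marker_high"]],
                        duration_col="months", event_col="relapse")
# Multivariate model adjusting for the conventional parameter; an
# independently prognostic marker would remain significant here.
multi = CoxPHFitter().fit(df[["months", "relapse", "marker_high", "gleason"]],
                          duration_col="months", event_col="relapse")
print(uni.summary[["exp(coef)", "p"]])
print(multi.summary[["exp(coef)", "p"]])
```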
Abstract:
Transport of radioactive iodide (¹³¹I⁻) in a structured clay loam soil under maize in its final growing phase was monitored during five consecutive irrigation experiments under ponding. Each time, 27 mm of water was applied. The water of the second experiment was spiked with 200 MBq of ¹³¹I⁻ tracer. Its activity was monitored as a function of depth and time with Geiger-Müller (G-M) detectors in 11 vertically installed access tubes. The aim of the study was to widen our current knowledge of water and solute transport in unsaturated soil under different agriculturally cultivated settings. It was assumed that the change in ¹³¹I⁻ activity (counting rate) is proportional to the change in soil water content. A rapid increase followed by a gradual decrease in ¹³¹I⁻ activity occurred at all depths and was attributed to preferential flow. Iodide transport through the structured soil profile was simulated with the HYDRUS-1D model. The model predicted relatively deep percolation of iodide within a short time, in good agreement with the observed vertical iodide distribution in the soil. We found that the top 30 cm of the soil profile is the most vulnerable layer in terms of water and solute movement, which is also the depth to which the root system of maize extends.
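For flavor, a bare-bones sketch of the kind of forward simulation involved (HYDRUS-1D itself solves coupled Richards and advection-dispersion equations with root-water uptake; this stripped-down explicit scheme with invented parameters only shows how a surface-applied pulse advects and spreads down a profile):

```python
import numpy as np

L, n = 1.0, 200                  # profile depth (m), number of grid cells
dz = L / n
v, D = 0.05 / 86400, 2e-8        # pore velocity (m/s), dispersion (m2/s)
dt = 0.25 * dz**2 / D            # explicit stability limit (diffusive)
c = np.zeros(n)
c[:6] = 1.0                      # applied tracer pulse in the top ~3 cm

for _ in range(int(5 * 86400 / dt)):                 # simulate 5 days
    adv = -v * np.gradient(c, dz)                    # advection term
    disp = D * np.gradient(np.gradient(c, dz), dz)   # dispersion term
    c += dt * (adv + disp)

depth = (np.arange(n) + 0.5) * dz
print("centre of mass after 5 days: %.3f m" % (np.sum(depth * c) / np.sum(c)))
```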
Abstract:
At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The two methods are conceptualized as different types of activities, each characterized by a primary goal. This minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways of conceptualizing the distinction more flexibly, and I make a concrete suggestion of how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.
Abstract:
Most organisms are able to synthesize vitamin C, whereas humans are not. To contribute to the elucidation of the molecular working mechanism of vitamin C transport through biological membranes, we cloned, overexpressed, purified, functionally characterized, and 2D- and 3D-crystallized a bacterial protein (UraDp) with 29% amino acid sequence identity to the human sodium-dependent vitamin C transporter 1 (SVCT1). Ligand-binding experiments by scintillation proximity assay revealed that uracil is the substrate preferably bound by UraDp. For structural analysis, we report the production of tubular 2D crystals and present a first projection structure of UraDp from negatively stained tubes. Furthermore, the successful growth of UraDp 3D crystals and their crystallographic analysis are described. These 3D crystals, which diffract X-rays to 4.2 Å resolution, pave the way towards a high-resolution crystal structure of a bacterial homologue with high amino acid sequence identity to human SVCT1.