68 results for Experiments.
Abstract:
1. Biodiversity-ecosystem functioning (BEF) experiments address ecosystem-level consequences of species loss by comparing communities of high species richness with communities from which species have been gradually eliminated. BEF experiments originally started with microcosms in the laboratory and with grassland ecosystems. A new frontier in experimental BEF research is manipulating tree diversity in forest ecosystems, compelling researchers to think big and comprehensively. 2. We present and discuss some of the major issues to be considered in the design of BEF experiments with trees and illustrate these with a new forest biodiversity experiment established in subtropical China (Xingangshan, Jiangxi Province) in 2009/2010. Using a pool of 40 tree species, extinction scenarios were simulated with tree richness levels of 1, 2, 4, 8 and 16 species on a total of 566 plots of 25.8 × 25.8 m each. 3. The goal of this experiment is to estimate effects of tree and shrub species richness on carbon storage and soil erosion; therefore, the experiment was established on sloped terrain. The following important design choices were made: (i) establishing many small rather than fewer larger plots, (ii) using high planting density and random mixing of species rather than lower planting density and patchwise mixing of species, (iii) establishing a map of the initial 'ecoscape' to characterize site heterogeneity before the onset of biodiversity effects and (iv) manipulating tree species richness not only in random but also in trait-oriented extinction scenarios. 4. Data management and analysis are particularly challenging in BEF experiments, with their hierarchical designs nesting individuals within species populations, within plots, within species compositions.
Statistical analysis best proceeds by partitioning these random terms into fixed-term contrasts, for example, species composition into contrasts for species richness and the presence of particular functional groups, which can then be tested against the remaining random variation among compositions. 5. We conclude that forest BEF experiments provide exciting and timely research options. They especially require careful thinking to allow multiple disciplines to measure and analyse data jointly and effectively. Achieving specific research goals and synergy with previous experiments involves trade-offs between different designs and requires manifold design decisions.
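The contrast logic described in point 4 can be sketched numerically. In this hypothetical example (function names and data are illustrative, not from the experiment), plot responses are first averaged per species composition, and the species-richness contrast (log2 of species number) is then tested against the remaining variation among compositions rather than against plot-level residuals:

```python
import numpy as np

def richness_f_ratio(richness, y):
    """F-ratio for a log2(species richness) contrast tested against
    the remaining variation among species compositions.

    richness : species number per composition (1, 2, 4, 8, 16)
    y        : mean response per composition (plots already averaged)
    """
    x = np.log2(np.asarray(richness, dtype=float))
    y = np.asarray(y, dtype=float)
    x_c = x - x.mean()
    slope = np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c)
    fitted = y.mean() + slope * x_c
    ss_contrast = np.sum((fitted - y.mean()) ** 2)  # 1 df for the richness contrast
    ss_among = np.sum((y - fitted) ** 2)            # residual among-composition variation
    return (ss_contrast / 1.0) / (ss_among / (len(y) - 2))

# Hypothetical data: 10 species compositions, two per richness level
r = [1, 1, 2, 2, 4, 4, 8, 8, 16, 16]
resp = [2.1, 1.9, 4.0, 4.2, 6.1, 5.9, 8.2, 7.8, 10.1, 9.9]
```

A large F-ratio indicates a richness effect over and above what differences among particular species compositions alone would produce.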
Abstract:
For several years now, neuroscientific research has been striving towards fundamental answers to questions about the relevance of sex/gender to language processing in the brain. This research has been carried out through the search for sex/gender differences in the neurobiology of language processing. The main aim has thus always been the differentiation of the sexes/genders, without defining what sex, what gender, what female or male is in neurolinguistic research. In other words, although neuroscientific findings have provided key insights into the brain functioning of women and men, neuropsychology has rarely questioned the complexity of the sex/gender variable beyond biology. What does "female" or "male" mean in human neurocognition? How are operationalisations implemented along the axes of "femaleness" or "maleness"? What biological evidence is used to register the variables sex and/or gender? In the neurosciences as well as in neurocognitive research, questions such as these have so far not been studied in detail, even though they are highly significant for the scientific process. Instead, the variable of sex/gender has always been treated as solely dichotomous (either female or male), oppositional and mutually exclusive. This is where the present theoretical contribution comes in. Based on findings in neuroscience and concepts in gender theory, this poster is dedicated to reflecting on what sex/gender is in the neuroscience of language processing. Following this aim, two levels of interest will be addressed. First: how do we define sex/gender at the level of participants? Second: how do we define sex/gender at the level of the experimental task? For the first, a multifactorial registration (work in progress) of the variable sex/gender will be presented, i.e. a tool that records sex/gender in terms of biology and social issues as well as on a spectrum between femaleness and maleness.
For the second, the compulsory dichotomy of a gendered task when neurolinguistically approaching our cognitions of sex/gender will be explored.
Abstract:
We have measured the bidirectional reflectance of spherical micrometer-sized water-ice particles in the visible spectral range over a wide range of incidence and emission angles. The small ice spheres were produced by spraying fine water droplets directly into liquid nitrogen. The resulting mean particle radii are 1.47 (+0.96/−0.58) μm. Such a material shares many properties with ice in comets and at the surface of icy satellites. Measurements show that the fresh sample material is highly backscattering, contrasting with natural terrestrial snow and frost. The formation of agglomerates of particles during the sample production results in a noticeable variability of the photometric properties of the samples in their initial state. We have also observed significant temporal evolutions of the scattering behavior of the samples, shifting towards more forward scattering within some tens of hours, resulting most likely from sintering processes. All reflectance data are fitted by the Hapke photometric model (1993 and 2002 formulation) with a one/two/three-parameter Henyey-Greenstein phase function and the resulting Hapke parameters are provided. These parameters can be used to compare laboratory results with the observed photometric behaviors of astronomical objects. We show, in particular, that the optical properties of the fresh micrometer-sized ice samples can be used to reproduce the predominant backscattering in the phase curves of Enceladus and Europa.
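The single-lobe Henyey-Greenstein phase function used in such fits has a simple closed form. A minimal sketch follows; note that the sign convention for the asymmetry parameter b differs between Hapke's 1993 and 2002 formulations, so the convention below is an assumption:

```python
import numpy as np

def henyey_greenstein(phase_angle_deg, b):
    """Single-lobe Henyey-Greenstein phase function
    p(g) = (1 - b^2) / (1 + 2*b*cos(g) + b^2)^(3/2),
    where g is the phase angle.

    With this sign convention, b < 0 produces a backscattering lobe
    (peak near g = 0, i.e. near opposition) and b > 0 a forward-
    scattering lobe; check the sign convention of the specific Hapke
    formulation before comparing fitted parameters.
    """
    g = np.radians(phase_angle_deg)
    b = float(b)
    return (1.0 - b**2) / (1.0 + 2.0 * b * np.cos(g) + b**2) ** 1.5
```

b = 0 recovers isotropic scattering (p ≡ 1), and the function is normalised so that its average over all solid angles is 1.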
Abstract:
Neurotensin(8-13) (NTS(8-13)) analogs with C- and/or N-terminal β-amino acid residues and three DOTA derivatives thereof have been synthesized (i.e., 1-6). A virtual docking experiment showed an almost perfect fit of one of the 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) derivatives, 6a, into a crystallographically identified receptor NTSR1 (Fig. 1). The affinities for the receptors of the NTS analogs and derivatives are low when determined with cell-membrane homogenates, while, with NTSR1-exhibiting cancer tissues, affinities in the single-digit nanomolar range can be observed (Table 2). Most of the β-amino acid-containing NTS(8-13) analogs (Table 1 and Fig. 2), including the (68)Ga complexes of the DOTA-substituted ones (6; Figs. 2 and 5), are stable for ca. 1 h in human serum and plasma, and in murine plasma. The biodistributions of two (68)Ga complexes (of 6a and 6b) in HT29 tumor-bearing nude mice, in the absence and in the presence of a blocking compound, after 10, 30, and 60 min (Figs. 3 and 4) lead to the conclusion that the amount of specifically bound radioligand is rather low. This was confirmed by PET-imaging experiments with the tumor-bearing mice (Fig. 6). Comparison of the in vitro plasma stability (after 1 h) with the ex vivo blood content (after 10-15 min) of the two (68)Ga complexes shows that they are rapidly cleaved in the animals (Fig. 5).
Abstract:
The occurrence of gaseous pollutants in soils has stimulated many experimental activities, including forced ventilation in the field as well as laboratory transport experiments with gases. The dispersion coefficient in advective-dispersive gas phase transport is often dominated by molecular diffusion, which leads to a large overall dispersivity γ. Under such conditions it is important to distinguish between flux and resident modes of solute injection and detection. The influence of the inlet type on the macroscopic injection mode was tested in two series of column experiments with gases at different mean flow velocities ν. First we compared infinite resident and flux injections, and second, semi-infinite resident and flux injections. It is shown that the macroscopically apparent injection condition depends on the geometry of the inlet section. A reduction of the cross-sectional area of the inlet relative to that of the column is very effective in excluding the diffusive solute input, thus allowing us to use the solutions for a flux injection even at rather low mean flow velocities ν. If the whole cross section of a column is exposed to a large reservoir like that of ambient air, a semi-infinite resident injection is established, which can be distinguished from a flux injection even at relatively high velocities ν, depending on the mechanical dispersivity of the porous medium.
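For reference, the distinction between injection modes enters through the inlet boundary condition of the one-dimensional advection-dispersion equation. Below is a sketch of the classical continuous-injection solution in the Ogata-Banks form for a first-type (resident-concentration) inlet boundary; a flux-type (third-type) boundary leads to a different solution with additional terms, and the parameter values are purely illustrative:

```python
from math import erfc, exp, sqrt

def ade_continuous_injection(x, t, v, D, c0=1.0):
    """Resident concentration c(x, t) for continuous injection into a
    1-D semi-infinite column (Ogata-Banks solution, first-type inlet
    boundary).

    x  : distance from the inlet
    t  : time since injection started
    v  : mean flow velocity
    D  : dispersion coefficient
    c0 : inlet concentration
    """
    denom = 2.0 * sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / denom)
                       + exp(v * x / D) * erfc((x + v * t) / denom))
```

At high mean flow velocity the second term becomes negligible and the two injection modes are indistinguishable; at low velocity (large diffusion-dominated dispersivity) they diverge, which is why the inlet geometry matters in gas-phase transport experiments.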
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived from the inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and ²²Na⁺ diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows the identification of the right hypothesis about the borehole disturbed zone and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and the effective diffusion coefficient of the filter. The proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone. A wrong assumption leads to large estimation errors.
The use of model identification criteria helps in the selection of the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors. Therefore, attention should be paid to minimizing the errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
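Model identification criteria of the kind invoked here trade goodness of fit against parameter count. A hedged sketch using the least-squares form of Akaike's information criterion follows; the actual criteria and parameter counts used for the DR experiment are not specified in this summary, and the numbers are illustrative:

```python
import math

def aic_least_squares(rss, n_obs, n_params):
    """Akaike information criterion for a least-squares fit with
    Gaussian errors (additive constants dropped):
        AIC = n * ln(RSS / n) + 2 * k
    Lower AIC is better; the 2k term penalises extra parameters such
    as the properties of a hypothesised borehole disturbed zone.
    """
    return n_obs * math.log(rss / n_obs) + 2 * n_params

# Hypothetical comparison: adding a disturbed-zone parameter barely
# improves the fit, so the simpler model is selected
aic_simple = aic_least_squares(rss=10.0, n_obs=50, n_params=3)
aic_bdz = aic_least_squares(rss=9.8, n_obs=50, n_params=4)
```

If the disturbed zone is real, its inclusion lowers the RSS enough to outweigh the penalty; this is the sense in which such criteria "help in the selection of the best model".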
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon–climate feedbacks.
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Non-intrusive optical DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). X-ray computed tomography (XRCT) analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction. This leaves untapped the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply digital volume correlation (DVC) techniques on XRCT scan data of "solid" analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques on XRCT volume data can successfully be used to quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure.
Furthermore, we discuss various options for optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
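The core operation behind DIC/DVC is subset matching by cross-correlation. A one-dimensional toy sketch follows; real DVC correlates 3-D subvolumes and interpolates to subvoxel precision, both of which are omitted here, and all names and data are illustrative:

```python
import numpy as np

def find_shift(ref, deformed, max_shift):
    """Recover an integer displacement by maximising the zero-normalised
    cross-correlation (ZNCC) between a reference window and shifted
    windows of the deformed signal -- a 1-D analogue of the subset
    matching used in DIC/DVC.
    """
    ref = np.asarray(ref, dtype=float)
    deformed = np.asarray(deformed, dtype=float)
    window = ref[max_shift:-max_shift]  # interior window, so all shifts fit

    def zncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    shifts = range(-max_shift, max_shift + 1)
    return max(shifts, key=lambda s: zncc(
        window, deformed[max_shift + s: len(deformed) - max_shift + s]))
```

In a full DVC implementation this matching is repeated for many subvolumes distributed through the scanned volume, and the resulting displacement field is differentiated to obtain the strain tensor field.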
Abstract:
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channels beyond that already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
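The point can be made concrete with the simplest possible example: a Monte Carlo estimate of π. The "randomness" merely samples an integral that is fixed in advance by the assumptions built into the simulation, so the conclusion is still reached by inference from those assumptions:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi.

    The fraction of points drawn uniformly on the unit square that
    fall inside the quarter disc converges to pi/4. Every ingredient
    of the conclusion -- uniformity, the unit square, the test
    x^2 + y^2 <= 1 -- is an assumption built into the simulation.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples
```

The estimate and its convergence rate follow by ordinary argument (the law of large numbers applied to the stated assumptions); the pseudo-random draws open no epistemic channel beyond that inference.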
Abstract:
Previous studies of the sediments of Lake Lucerne have shown that massive subaqueous mass movements affecting unconsolidated sediments on lateral slopes are a common process in this lake, and, in view of historical reports describing damaging waves on the lake, it was suggested that tsunamis generated by mass movements represent a considerable natural hazard on the lakeshores. Newly performed numerical simulations combining two-dimensional, depth-averaged models for mass-movement propagation and for tsunami generation, propagation and inundation reproduce a number of reported tsunami effects. Four analysed mass-movement scenarios, three of them based on documented slope failures involving volumes of 5.5 to 20.8 × 10⁶ m³, show peak wave heights of several metres and maximum runup of 6 to >10 m in the directly affected basins, while effects in neighbouring basins are less drastic. The tsunamis cause large-scale inundation over distances of several hundred metres on flat alluvial plains close to the mass-movement source areas. Basins at the ends of the lake experience regular water-level oscillations with characteristic periods of several minutes. The vulnerability of potentially affected areas has increased dramatically since the times of the damaging historical events, warranting a thorough evaluation of the hazard.
Abstract:
Transport of radioactive iodide (¹³¹I⁻) in a structured clay loam soil under maize in its final growing phase was monitored during five consecutive irrigation experiments under ponding. Each time, 27 mm of water were applied. The water of the second experiment was spiked with 200 MBq of ¹³¹I⁻ tracer. Its activity was monitored as a function of depth and time with Geiger-Müller (G-M) detectors in 11 vertically installed access tubes. The aim of the study was to widen our current knowledge of water and solute transport in unsaturated soil under different agriculturally cultivated settings. It was assumed that the change in ¹³¹I⁻ activity (or counting rate) is proportional to the change in soil water content. A rapid increase followed by a gradual decrease in ¹³¹I⁻ activity occurred at all depths and was attributed to preferential flow. Iodide transport through the structured soil profile was simulated with the HYDRUS 1D model. The model predicted relatively deep percolation of iodide within a short time, in good agreement with the observed vertical iodide distribution in soil. We found that the top 30 cm of the soil profile is the most vulnerable layer in terms of water and solute movement, which is the same depth to which the root system of maize can extend.
Abstract:
The aim of this study was to improve cage systems for maintaining adult honey bee (Apis mellifera L.) workers under in vitro laboratory conditions. To achieve this goal, we experimentally evaluated the impact of different cages, developed by scientists of the international research network COLOSS (Prevention of honey bee COlony LOSSes), on the physiology and survival of honey bees. We identified three cages that promoted good survival of honey bees. The bees from cages that exhibited greater survival had relatively lower titers of deformed wing virus, suggesting that deformed wing virus is a significant marker reflecting stress level and health status of the host. We also determined that a leak- and drip-proof feeder was an integral part of a cage system, and a feeder modified from a 20-ml plastic syringe displayed the best result in providing a steady food supply to bees. Finally, we also demonstrated that the addition of protein to the bees' diet could significantly increase the level of vitellogenin gene expression and improve bees' survival. This international collaborative study represents a critical step toward improvement of cage designs and feeding regimes for honey bee laboratory experiments.