918 results for Optimal Sampling Time
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans Data are particularly suited to study the genetic, morphological and functional diversity of plankton. The present data publication contains measurements from the Continuous Surface Sampling System [CSSS] made during one campaign of the Tara Oceans Expedition. Water was pumped at the front of the vessel from ~2 m depth, then de-bubbled and circulated to a Sea-Bird TSG temperature and conductivity sensor. System maintenance (instrument cleaning, flushing) was done approximately once a week and in port between successive legs. All data were time-stamped and georeferenced with GPS.
Abstract:
Archaeological fish otoliths have the potential to serve as proxies for both season of site occupation and palaeoclimate conditions. By sampling along the distinctive sub-annual seasonal bands of the otolith and completing a stable isotope (δ¹⁸O, δ¹³C) analysis, variations within the fish’s environment can be identified. Through the analysis of cod otoliths from two archaeological sites on Kiska Island, Gertrude Cove (KIS-010) and Witchcraft Point (KIS-005), this research evaluates a micromilling methodological approach to extracting climatic data from archaeological cod otoliths. In addition, δ¹⁸O otolith data and radiocarbon dates frame a discussion of Pacific cod harvesting, site occupation, and changing climatic conditions on Kiska Island. To aid in the interpretation of the archaeological Pacific cod results, archaeological and modern Atlantic cod otoliths were also analyzed as a component of this study. The Atlantic cod otoliths provided the methodological and interpretative framework for the study, and also served to assess the efficacy of this sampling strategy for archaeological materials and to add time-depth to existing datasets. The δ¹⁸O otolith values successfully illustrate relative variation in ambient water temperature. The Pacific cod δ¹⁸O values demonstrate a weak seasonal signal identifiable up to year 3, followed by relatively stable values until year 6/7 when values continuously increase. Based on the δ¹⁸O values, the Pacific cod were exposed to the coldest water temperatures immediately prior to capture. The lack of a clear cycle of seasonal variation and the continued increase in values towards the otolith edge obscure the season of capture and indicate that other behavioural, environmental, or methodological factors influenced the otolith δ¹⁸O values. It is suggested that Pacific cod would have been harvested throughout the year, and that the presence of cod remains in Aleutian archaeological sites cannot be used as a reliable indicator of summer occupation. In addition, when the δ¹⁸O otolith values are integrated with radiocarbon dates and known climatic regimes, it is demonstrated that climatic conditions played an integral role in the pattern of occupation at Gertrude Cove. Initial site occupation coincides with the end of a neoglacial cooling period, and the most recent and continuous occupation coincides with the end of a localized warming period and the onset of the Little Ice Age (LIA).
Abstract:
The first Air Chemistry Observatory at the German Antarctic station Georg von Neumayer (GvN) was operated for 10 years from 1982 to 1991. The focus of the established observational programme was on characterizing the physical properties and chemical composition of the aerosol, as well as on monitoring the changing trace gas composition of the background atmosphere, especially concerning greenhouse gases. The observatory was designed by the Institut für Umweltphysik, University of Heidelberg (UHEI-IUP). The experiments were installed inside the bivouac lodge, mounted on a sledge and set on a snow hill to prevent snow accumulation during blizzards. All experiments were under daily control and daily performance protocols were documented. A ventilated stainless steel inlet stack (total height about 3-4 m above the snow surface) with a 50% aerodynamic cut-off diameter around 7-10 µm at wind velocities between 4-10 m/s supplied all experiments with ambient air. Contamination-free sampling was realized by several means: (i) The Air Chemistry Observatory was situated in a clean air area about 1500 m south of GvN. Because northern wind directions are very rare, contamination from the base can be excluded for most of the time. (ii) The power supply (20 kW) was provided by a cable from the main station, so no fuel-driven generator was operated in the immediate vicinity. (iii) Contamination-free sampling was controlled by the permanently recorded wind velocity, wind direction and condensation particle concentration. Contamination was indicated if one of the following criteria was met: wind direction within a 330°-30° sector, wind velocity <2.2 m/s or >17.5 m/s, or condensation particle concentrations >2500/cm**3 during summer, >800/cm**3 during spring/autumn and >400/cm**3 during winter. If one or a definable combination of these criteria was met, high-volume aerosol sampling and part of the trace gas sampling were interrupted. From 1982 through 1991-01-14, surface ozone was measured with an electrochemical concentration cell (ECC). Surface ozone mixing ratios are given in ppbv = parts per 10**9 by volume. The averaging time corresponds to the given time intervals in the data sheet. The accuracy of the values is better than ±1 ppbv and the detection limit is around 1.0 ppbv. Aerosols were sampled on two Whatman 541 cellulose filters in series and analyzed by ion chromatography at the UHEI-IUP. Generally, the sampling period was seven days but could be up to two weeks on occasion. The air flow was around 100 m**3/h and typically 10000-20000 m**3 of ambient air was forced through the filters for one sample. Concentration values are given in nanogram (ng) per 1 m**3 air at standard pressure and temperature (1013 mbar, 273.16 K). Uncertainties of the values were approximately ±10% to ±15% for the main components MSA, chloride, nitrate, sulfate and sodium, and between ±20% and ±30% for the minor species bromide, ammonium, potassium, magnesium and calcium.
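The contamination criteria above amount to a simple per-interval flag; a minimal sketch of such a check, assuming one averaged wind and particle measurement per sampling interval and using only the thresholds quoted above (the function and variable names are illustrative, not from the original data-acquisition system):

def is_contaminated(wind_dir_deg, wind_speed_ms, cn_per_cm3, season):
    # Wind blowing out of the station sector (330 deg through north to 30 deg).
    in_station_sector = wind_dir_deg >= 330.0 or wind_dir_deg <= 30.0
    # Too calm (local stagnation) or too strong (blowing snow).
    bad_speed = wind_speed_ms < 2.2 or wind_speed_ms > 17.5
    # Season-dependent condensation particle thresholds (particles per cm**3).
    cn_limit = {"summer": 2500.0, "spring": 800.0, "autumn": 800.0, "winter": 400.0}[season]
    high_cn = cn_per_cm3 > cn_limit
    return in_station_sector or bad_speed or high_cn

# Example: a calm summer interval is flagged even with a moderate particle load.
print(is_contaminated(wind_dir_deg=120.0, wind_speed_ms=1.5, cn_per_cm3=900.0, season="summer"))  # True

In the observatory's operation such a flag would interrupt the high-volume aerosol sampling and part of the trace gas sampling, as described above.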
Abstract:
Piotr Omenzetter and Simon Hoell's work within the Lloyd's Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to capture months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
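Read literally, the site score above is a sensitivity-weighted sum of species densities set against wind-energy profitability; a minimal sketch under that reading, with entirely hypothetical densities, weights and profitability values (not the dissertation's actual weighting scheme):

import numpy as np

# Hypothetical per-site bird densities (rows = 10 km² sites, columns = species).
density = np.array([[0.8, 0.1, 2.3],
                    [0.2, 0.6, 0.4]])
# Hypothetical OWED sensitivity weights (collision + displacement) per species.
sensitivity = np.array([1.0, 0.3, 0.7])
# Conservation risk score per site: weighted sum across species.
risk = density @ sensitivity
# Hypothetical profitability index per site (wind speed at hub height, grid distance).
profit = np.array([5.2, 3.9])
# Sites can then be placed on the risk-vs-profit tradeoff plot, e.g. ranked by profit per unit risk.
print(risk, profit / risk)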
Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and conservation status of cetaceans, before being applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier to the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot view of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
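A minimal sketch of the least-cost routing step, assuming the cumulative cost surface is available as a 2-D array and using scikit-image's minimum-cost-path solver; the grid, multiplier and port cell indices are placeholders:

import numpy as np
from skimage.graph import route_through_array

# Hypothetical resistance surface: a baseline traversal cost plus a multiplier
# times the cetacean conservation cost (density weighted by conservation status).
rng = np.random.default_rng(0)
conservation_cost = rng.random((200, 300))
multiplier = 5.0
resistance = 1.0 + multiplier * conservation_cost

# Least-cost route between a start port and an end location, given as (row, col) cells.
start, end = (10, 5), (180, 290)
path, total_cost = route_through_array(resistance, start, end,
                                       fully_connected=True, geometric=True)

# Re-solving for a range of multipliers traces out the conservation-vs-distance tradeoff.
print(len(path), total_cost)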
Essential to the input of these decision frameworks are distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred to estimate potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
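The threshold rule described above, minimizing false positive and false negative error rates, can be read as maximizing Youden's J statistic on the ROC curve; a minimal sketch with scikit-learn, where y_obs and p_hat are placeholders for observed presence/absence and GAM-predicted occurrence probabilities:

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y_obs = rng.integers(0, 2, size=500)                             # placeholder presences/absences
p_hat = np.clip(0.3 * y_obs + 0.7 * rng.random(500), 0.0, 1.0)   # placeholder predicted probabilities

fpr, tpr, thresholds = roc_curve(y_obs, p_hat)
# The threshold that jointly minimizes false positives (fpr) and false negatives (1 - tpr)
# is the one maximizing tpr - fpr (Youden's J).
best = thresholds[np.argmax(tpr - fpr)]
presence = (p_hat >= best).astype(int)                           # binary presence/absence map values
print(best)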
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.
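For reference, the Conventional Distance Sampling estimator behind stratum-level estimates is the standard line-transect form (a textbook expression, possibly differing in detail from the exact variant used for these surveys):

\hat{D} = \frac{n \, \hat{f}(0) \, \bar{s}}{2 L}, \qquad \hat{N} = \hat{D} \cdot A,

where n is the number of detected groups, \bar{s} the mean group size, L the total transect length searched, \hat{f}(0) the estimated probability density of perpendicular detection distances evaluated at zero, and A the stratum area. Density Surface Modelling instead fits a spatial model to segment-level counts with an offset for the effective area searched, so density can vary with environmental covariates rather than being constant within a stratum.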
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, enabling conservation interests, industry and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
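For concreteness, the reduced-rank factorization implied by a latent structure (PARAFAC-type) model writes the joint probability mass function of p categorical variables as a finite mixture of product-multinomial kernels:

P(y_1 = c_1, \ldots, y_p = c_p) = \sum_{h=1}^{k} \lambda_h \prod_{j=1}^{p} \psi^{(j)}_{h c_j}, \qquad \lambda_h \ge 0, \quad \sum_{h=1}^{k} \lambda_h = 1,

so the k components give a nonnegative rank-k decomposition of the probability tensor, whereas a log-linear model achieves parsimony by setting interaction terms of \log P to zero. This is the standard latent class representation; the collapsed Tucker class proposed in Chapter 2 sits between this PARAFAC form and the more general Tucker decomposition, and is not reproduced here.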
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
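A background fact relevant to Chapter 4: among all Gaussian distributions, the one minimizing the Kullback-Leibler divergence \mathrm{KL}(\pi \,\|\, q) from a posterior \pi with finite second moments is obtained by moment matching,

q^\star = \mathcal{N}(m^\star, \Sigma^\star), \qquad m^\star = \mathbb{E}_\pi[\theta], \quad \Sigma^\star = \mathrm{Cov}_\pi(\theta),

so the optimal Gaussian approximation in this sense reproduces the exact posterior mean and covariance. This is the generic statement only, not the chapter's model-specific derivation for Diaconis--Ylvisaker priors, which is what supplies the convergence rates and finite-sample bounds mentioned above.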
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
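The basic data reduction behind this paradigm, extracting waiting times between exceedances of a high threshold from a time-indexed series, is straightforward; a minimal sketch with a placeholder series and threshold choice:

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=10_000)      # placeholder heavy-tailed time series
u = np.quantile(x, 0.98)                   # high threshold, here the 98th percentile

exceed_idx = np.flatnonzero(x > u)         # time indices of threshold exceedances
waiting_times = np.diff(exceed_idx)        # gaps between successive exceedances

# Roughly geometric/exponential gaps suggest weak temporal dependence in the extremes;
# clusters of short gaps indicate dependence, which is what the proposed framework models.
print(waiting_times[:10], waiting_times.mean())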
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
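To make the mixing issue concrete, here is a minimal Albert-Chib-style truncated-normal data augmentation sampler for an intercept-only probit model with rare events, with lag-1 autocorrelation as a crude mixing diagnostic; this is a generic textbook sampler under a flat prior, not the chapter's exact models or datasets:

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, n_success = 10_000, 20                    # large n, very few successes (rare events)
y = np.zeros(n)
y[:n_success] = 1.0

beta, draws = 0.0, []
for _ in range(2_000):
    # z_i | beta, y_i ~ N(beta, 1), truncated to (0, inf) if y_i = 1 and to (-inf, 0] otherwise.
    lo = np.where(y == 1, 0.0 - beta, -np.inf)   # bounds in standardized units (scale = 1)
    hi = np.where(y == 1, np.inf, 0.0 - beta)
    z = truncnorm.rvs(lo, hi, loc=beta, scale=1.0, random_state=rng)
    # beta | z ~ N(mean(z), 1/n) under a flat prior on the intercept.
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

b = np.array(draws[500:])                    # discard burn-in
lag1 = np.corrcoef(b[:-1], b[1:])[0, 1]      # lag-1 autocorrelation near 1 indicates slow mixing
print(b.mean(), lag1)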
Abstract:
With increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy has implemented optimal staffing techniques before, in the 1990s and 2000s, with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric for carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: 1) skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included about how these results change if the operations are relieved of the bottleneck of installing the holdback bar at launch time.
Abstract:
A process of global importance in carbon cycling is the remineralization of algae biomass by heterotrophic bacteria, most notably during massive marine algae blooms. Such blooms can trigger secondary blooms of planktonic bacteria that consist of swift successions of distinct bacterial clades, most prominently members of the Flavobacteriia, Gammaproteobacteria and the alphaproteobacterial Roseobacter clade. This study explores such successions during spring phytoplankton blooms in the southern North Sea (German Bight) for four consecutive years. The surface water samples were taken at Helgoland Island about 40 km offshore in the southeastern North Sea in the German Bight at the station 'Kabeltonne' (54° 11.3' N, 7° 54.0' E) between the main island and the minor island, Düne (German for 'dune'), using small research vessels (http://www.awi.de/en/expedition/ships/more-ships.html). Water depths at this site fluctuate from 6 to 10 m over the tidal cycle. Samples were processed as described previously (Teeling et al., 2012; doi:10.7554/eLife.11888.001) in the laboratory of the Biological Station Helgoland within less than two hours after sampling. Assessment of absolute cell numbers and bacterioplankton community composition was carried out as described previously (Thiele et al., 2011; doi:10.1016/B978-0-444-53199-5.00056-7). To obtain total cell numbers, DNA of formaldehyde-fixed cells filtered onto 0.2 µm pore-size filters was stained with 4',6-diamidino-2-phenylindole (DAPI). Fluorescently labeled cells were subsequently counted on filter sections using an epifluorescence microscope. Likewise, bacterioplankton community composition was assessed by catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) of formaldehyde-fixed cells on 0.2 µm pore-size filters.
Abstract:
The Mediterranean is regarded as a region of intense climate change. To better understand future climate change, this area has been the target of several palaeoclimate studies which also studied stable isotope proxies that are directly linked to the stable isotope composition of water, such as tree rings, tooth enamel or speleothems. For such work, it is also essential to establish an isotope hydrology framework of the region of interest. Surface waters from streams and lakes as well as groundwater from springs on the island of Corsica were sampled between 2003 and 2009 for their oxygen and hydrogen isotope compositions. Isotope values from lake waters were enriched in heavier isotopes and define a local evaporation line (LEL). On the other hand, stream and spring waters reflect the isotope composition of local precipitation in the catchment. The intersection of the LEL and the linear fit of the spring and stream waters reflects the mean isotope composition of the annual precipitation (δP), with values of -8.6(±0.2) per mil for δ¹⁸O and -58(±2) per mil for δ²H. This value is also a good indicator of the average isotope composition of the local groundwater on the island. Surface water samples reflect the altitude isotope effect with a value of -0.17(±0.02) per mil per 100 m elevation for oxygen isotopes. At Vizzavona Pass in central Corsica, water samples from two catchments within a lateral distance of only a few hundred metres showed unexpected but systematic differences in their stable isotope composition. At this specific location, the direction of exposure seems to be an important factor. The differences were likely caused by isotopic enrichment during recharge in warm weather conditions on south-exposed valley flanks compared to the opposite, north-exposed valley flanks.
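The quoted mean precipitation composition follows from intersecting the two fitted lines in δ¹⁸O-δ²H space; a minimal sketch of that step, with illustrative slopes and intercepts chosen only so that the intersection lands near the reported δP of about -8.6 per mil (δ¹⁸O) and -58 per mil (δ²H), not the study's fitted coefficients:

# Both lines expressed as delta2H = slope * delta18O + intercept.
lel_slope, lel_intercept = 5.0, -15.0     # hypothetical local evaporation line (lake waters)
mwl_slope, mwl_intercept = 8.0, 11.0      # hypothetical stream/spring (meteoric) water line

# Intersection of the two lines = mean annual precipitation composition (δP).
d18O_P = (mwl_intercept - lel_intercept) / (lel_slope - mwl_slope)
d2H_P = mwl_slope * d18O_P + mwl_intercept
print(round(d18O_P, 1), round(d2H_P, 1))   # approx -8.7 and -58.3 with these example numbers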
Abstract:
This data set comprises time series of aboveground community plant biomass (Sown plant community, Weed plant community, Dead plant material, and Unidentified plant material; all measured in biomass as dry weight) and species-specific biomass from the sown species of several experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). Aboveground community biomass was normally harvested twice a year just prior to mowing (during peak standing biomass, generally in May and August; in 2002 only once, in September) on all experimental plots in the Jena Experiment. This was done by clipping the vegetation at 3 cm above ground in up to four rectangles of 0.2 x 0.5 m per large plot. The location of these rectangles was assigned by random selection of new coordinates every year within the core area of the plots. The positions of the rectangles within plots were identical for all plots. The harvested biomass was sorted into categories: individual species for the sown plant species, weed plant species (species not sown at the particular plot), detached dead plant material (i.e., dead plant material in the data file), and remaining plant material that could not be assigned to any category (i.e., unidentified plant material in the data file). All biomass was dried to constant weight (70°C, >= 48 h) and weighed. Sown plant community biomass was calculated as the sum of the biomass of the individual sown species. The data for individual samples and the mean over samples for the biomass measures on the community level are given. Overall, analyses of the community biomass data have identified species richness as well as functional group composition as important drivers of a positive biodiversity-productivity relationship. The following series of datasets are contained in this collection: 1. Plant biomass from the Main Experiment: In the Main Experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). 2. Plant biomass from the Dominance Experiment: In the Dominance Experiment, 206 grassland plots of 3.5 x 3.5 m were established from a pool of 9 species that can be dominant in semi-natural grassland communities of the study region. In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 3, 4, 6, and 9 species). 3. Plant biomass from the monoculture plots: In the monoculture plots the sown plant community contains only a single species per plot and this species is a different one for each plot. Which species has been sown in which plot is stated in the plot information table for monocultures (see further details below). The monoculture plots of 3.5 x 3.5 m were established for all of the 60 plant species of the Jena Experiment species pool, with two replicates per species, in May 2002 like the other experiments. All plots were maintained by bi-annual weeding and mowing.
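The community-level aggregation described above (sown plant community biomass as the sum of the sown species' dry weights, then the mean over the replicate sample rectangles per plot) is a simple groupby; a minimal sketch with a hypothetical long-format harvest table whose column names are placeholders, not the published data file's headers:

import pandas as pd

# Hypothetical long-format harvest records: one row per plot, sample rectangle and category.
df = pd.DataFrame({
    "plot":      ["B1A01", "B1A01", "B1A01", "B1A01"],
    "sample":    [1, 1, 2, 2],
    "category":  ["sown:Festuca pratensis", "weeds", "sown:Trifolium repens", "dead"],
    "dry_g_m2":  [120.5, 10.2, 80.0, 15.3],
})

sown = df[df["category"].str.startswith("sown:")]
# Sown plant community biomass per plot and sample rectangle = sum over the sown species ...
per_sample = sown.groupby(["plot", "sample"])["dry_g_m2"].sum()
# ... and the community-level value reported per plot = mean over the sample rectangles.
per_plot_mean = per_sample.groupby("plot").mean()
print(per_plot_mean)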
Abstract:
This collection contains measurements of abundance and diversity of different groups of aboveground invertebrates sampled on the plots of the different sub-experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. The following series of datasets are contained in this collection: 1. Measurements of ant abundance (number of individuals attracted to baits) and ant occurrence (binary data) in the Main Experiment in 2006 and 2013. Ants were sampled using two types of baited traps receiving ~10 g of tuna or ~10 g of honey/sucrose. After 30 min the occurrence (presence = 1 / absence = 0) and abundance (number) of ants at the two types of baits was recorded and pooled per plot.
Abstract:
This collection contains measurements of vegetation and soil surface cover measured on the plots of the different sub-experiments at the field site of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. The following series of datasets are contained in this collection: 1. Measurements of vegetation cover, i.e. the proportion of soil surface area that is covered by different categories of plants per estimated plot area. Data were collected on the plant community level (sown plant community, weed plant community, dead plant material, and bare ground) and on the level of individual plant species in the case of the species that were sown into the plots to create the gradient of plant diversity.