Abstract:
BACKGROUND Panic disorder is characterised by the presence of recurrent unexpected panic attacks, discrete periods of fear or anxiety that have a rapid onset and include symptoms such as racing heart, chest pain, sweating and shaking. Panic disorder is common in the general population, with a lifetime prevalence of 1% to 4%. A previous Cochrane meta-analysis suggested that psychological therapy (either alone or combined with pharmacotherapy) can be chosen as a first-line treatment for panic disorder with or without agoraphobia. However, it is not yet clear whether certain psychological therapies can be considered superior to others. In order to answer this question, in this review we performed a network meta-analysis (NMA), in which we compared eight different forms of psychological therapy and three forms of a control condition. OBJECTIVES To assess the comparative efficacy and acceptability of different psychological therapies and different control conditions for panic disorder, with or without agoraphobia, in adults. SEARCH METHODS We conducted the main searches in the CCDANCTR electronic databases (studies and references registers), all years to 16 March 2015. We conducted complementary searches in PubMed and trials registries. Supplementary searches included reference lists of included studies, citation indexes, personal communication to the authors of all included studies and grey literature searches in OpenSIGLE. We applied no restrictions on date, language or publication status. SELECTION CRITERIA We included all relevant randomised controlled trials (RCTs) focusing on adults with a formal diagnosis of panic disorder with or without agoraphobia. We considered the following psychological therapies: psychoeducation (PE), supportive psychotherapy (SP), physiological therapies (PT), behaviour therapy (BT), cognitive therapy (CT), cognitive behaviour therapy (CBT), third-wave CBT (3W) and psychodynamic therapies (PD). 
We included both individual and group formats. Therapies had to be administered face-to-face. The comparator interventions considered for this review were: no treatment (NT), wait list (WL) and attention/psychological placebo (APP). For this review we considered four short-term (ST) outcomes (ST-remission, ST-response, ST-dropouts, ST-improvement on a continuous scale) and one long-term (LT) outcome (LT-remission/response). DATA COLLECTION AND ANALYSIS As a first step, we conducted a systematic search of all relevant papers according to the inclusion criteria. For each outcome, we then constructed a treatment network in order to clarify the extent to which each type of therapy and each comparison had been investigated in the available literature. Then, for each available comparison, we conducted a random-effects meta-analysis. Subsequently, we performed a network meta-analysis in order to synthesise the available direct evidence with indirect evidence, and to obtain an overall effect size estimate for each possible pair of therapies in the network. Finally, we calculated a probabilistic ranking of the different psychological therapies and control conditions for each outcome. MAIN RESULTS We identified 1432 references; after screening, we included 60 studies in the final qualitative analyses. Among these, 54 (including 3021 patients) were also included in the quantitative analyses. With respect to the analyses for the first of our primary outcomes (short-term remission), the most studied of the included psychological therapies was CBT (32 studies), followed by BT (12 studies), PT (10 studies), CT (three studies), SP (three studies) and PD (two studies). The quality of the evidence for the entire network was found to be low for all outcomes. The quality of the evidence for CBT vs NT, CBT vs SP and CBT vs PD was low to very low, depending on the outcome. The majority of the included studies were at unclear risk of bias with regard to the randomisation process.
We found almost half of the included studies to be at high risk of attrition bias and detection bias. We also found selective outcome reporting bias to be present and we strongly suspected publication bias. Finally, we found almost half of the included studies to be at high risk of researcher allegiance bias. Overall, the networks appeared to be well connected, but were generally underpowered to detect any important disagreement between direct and indirect evidence. The results showed the superiority of psychological therapies over the WL condition, although this finding was amplified by evident small study effects (SSE). The NMAs for ST-remission, ST-response and ST-improvement on a continuous scale showed well-replicated evidence in favour of CBT, as well as some sparse but relevant evidence in favour of PD and SP, over other therapies. In terms of ST-dropouts, PD and 3W showed better tolerability than other psychological therapies in the short term. In the long term, CBT and PD showed the highest level of remission/response, suggesting that the effects of these two treatments may be more stable than those of other psychological therapies. However, all the mentioned differences among active treatments must be interpreted while taking into account that in most cases the effect sizes were small and/or results were imprecise. AUTHORS' CONCLUSIONS There is no high-quality, unequivocal evidence to support one psychological therapy over the others for the treatment of panic disorder with or without agoraphobia in adults. However, the results show that CBT - the most extensively studied among the included psychological therapies - was often superior to other therapies, although the effect size was small and the level of precision was often insufficient or clinically irrelevant.
In the only two studies available that explored PD, this treatment showed promising results, although further research is needed in order to better explore the relative efficacy of PD with respect to CBT. Furthermore, PD appeared to be the best tolerated (in terms of ST-dropouts) among psychological treatments. Unexpectedly, we found some evidence in support of the possible viability of non-specific supportive psychotherapy for the treatment of panic disorder; however, the results concerning SP should be interpreted cautiously because of the sparsity of evidence regarding this treatment and, as in the case of PD, further research is needed to explore this issue. Behaviour therapy did not appear to be a valid alternative to CBT as a first-line treatment for patients with panic disorder with or without agoraphobia.
Abstract:
The discrete-time Markov chain is commonly used in describing changes of health states for chronic diseases in a longitudinal study. Statistical inferences on comparing treatment effects or on finding determinants of disease progression usually require estimation of transition probabilities. In many situations, when the outcome data have some missing observations or the variable of interest (called a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have categorical outcomes with missing observations or (2) that use complete or incomplete surrogate observations to analyze a categorical latent outcome. For (1), different missing-data mechanisms were considered for empirical studies using methods that include the EM algorithm, Monte Carlo EM and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation. This method was also extended to cover the computation of standard errors. The proposed methods were demonstrated with a schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research were also discussed.
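The forward-backward machinery mentioned in the abstract can be sketched as follows. This is an illustrative smoothing pass for a discrete hidden Markov model, not the dissertation's actual code; the function name, the scaling scheme, and the two-state example are assumptions for illustration only.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Forward-backward smoothing for a discrete HMM (illustrative sketch).

    obs: sequence of observed symbol indices
    A:   (S, S) transition matrix, A[i, j] = P(state j at t+1 | state i at t)
    B:   (S, K) emission matrix,   B[i, k] = P(symbol k | state i)
    pi:  (S,) initial state distribution
    Returns the (T, S) posterior state probabilities (gamma).
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))   # scaled forward probabilities
    beta = np.zeros((T, S))    # scaled backward probabilities
    scale = np.zeros(T)        # per-step normalizers (avoid underflow)

    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]

    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

In a full Baum-Welch (EM) fit, these posteriors would drive the re-estimation of A and B; the abstract's extension to standard errors is not reproduced here.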
Abstract:
Introduction. The HIV/AIDS disease burden disproportionately affects minority populations, specifically African Americans. While sexual risk behaviors play a role in the observed HIV burden, other factors including gender, age, socioeconomics, and barriers to healthcare access may also be contributory. The goal of this study was to determine how far along in the HIV/AIDS disease process people of different ethnicities first present for healthcare. The study specifically analyzed the differences in CD4 cell counts at the initial HIV-1 diagnosis with respect to ethnicity. The study also analyzed racial differences in HIV/AIDS risk factors. Methods. This is a retrospective study using data from the Adult Spectrum of HIV Disease (ASD), collected by the City of Houston Department of Health. The ASD database contains information on newly reported HIV cases in the Harris County District Hospitals between 1989 and 2000. Each patient had an initial and a follow-up report. The extracted variables of interest from the ASD data set were CD4 counts at the initial HIV diagnosis, race, gender, age at HIV diagnosis and behavioral risk factors. One-way ANOVA was used to examine differences in baseline CD4 counts at HIV diagnosis between racial/ethnic groups. Chi square was used to analyze racial differences in risk factors. Results. The analyzed study sample comprised 4767 patients. The study population was 47% Black, 37% White and 16% Hispanic [p<0.05]. The mean and median CD4 counts at diagnosis were 254 and 193 cells per μl, respectively. At the initial HIV diagnosis Blacks had the highest average CD4 counts (285), followed by Whites (233) and Hispanics (212) [p<0.001]. These statistical differences, however, were only observed with CD4 counts above 350 [p<0.001], even when adjusted for age at diagnosis and gender [p<0.05].
Looking at risk factors, Blacks were mostly affected by intravenous drug use (IVDU) and heterosexuality, whereas Whites and Hispanics were more affected by male homosexuality [p<0.05]. Conclusion. (1) There were statistical differences in CD4 counts with respect to ethnicity, but these differences only existed for CD4 counts above 350. These differences, however, do not appear to have clinical significance. Counterintuitively, Blacks had the highest CD4 counts, followed by Whites and Hispanics. (2) 50% of this study group clinically had AIDS at their initial HIV diagnosis (median CD4 = 193), irrespective of ethnicity. It was not clear from the data analysis whether these observations were due to failure of early HIV surveillance, HIV testing policies or healthcare access. More studies need to be done to address this question. (3) Homosexuality and bisexuality were the biggest risk factors for Whites and Hispanics, whereas Blacks were mostly affected by heterosexuality and IVDU, implying a need for different public health intervention strategies for these racial groups.
Abstract:
The purpose of this comparative analysis of CHIP Perinatal policy (42 CFR § 457) was to provide a basis for understanding the variation in policy outputs across the twelve states that, as of June 2007, implemented the Unborn Child rule. In 2002, this Department of Health and Human Services regulation expanded the definition of “child” to include the period from conception to birth, allowing states to consider an unborn child a “targeted low-income child” and therefore eligible for SCHIP coverage. Specific study aims were to (1) describe typologically the structural and contextual features of the twelve states that adopted a CHIP Perinatal policy; (2) describe and differentiate among the various designs of CHIP Perinatal policy implemented in the states; and (3) develop a conceptual model that links the structural and contextual features of the adopting states to differences in the forms the policy assumed, once it was implemented. Secondary data were collected from publicly available information sources to describe characteristics of states’ political system, health system, economic system, sociodemographic context and implemented policy attributes. I posited that socio-demographic differences, political system differences and health system differences would directly account for the observed differences in policy output among the states. Exploratory data analysis techniques, which included median polishing and multidimensional scaling, were employed to identify compelling patterns in the data. Scaled results across model components showed that economic system was most closely related to policy output, followed by health system. Political system and socio-demographic characteristics were shown to be weakly associated with policy output. Goodness-of-fit measures for MDS solutions implemented across states and model components, in one and two dimensions, were very good.
This comparative policy analysis of twelve states that adopted and implemented HHS Regulation 42 C.F.R. § 457 contributes to existing knowledge in three areas: CHIP Perinatal policy, public health policy and the policy sciences. First, the framework allows for the identification of CHIP Perinatal program design possibilities and provides a basis for future studies that evaluate policy impact or performance. Second, studies of policy determinants are not well represented in the health policy literature; thus, this study contributes to the development of the literature in public health policy. Finally, the conceptual framework for policy determinants developed in this study suggests new ways for policy makers and practitioners to frame policy arguments, encouraging policy change or reform.
Abstract:
As schools are pressured to perform on academics and standardized examinations, they are reluctant to dedicate increased time to physical activity. After-school exercise and health programs may provide an opportunity to engage in more physical activity without taking time away from coursework during the day. The current study is a secondary analysis of data from a randomized trial of a 10-week after-school program (six schools, n = 903) that implemented an exercise component based on the CATCH physical activity component and health modules based on the culturally-tailored Bienestar health education program. Outcome variables included BMI and aerobic capacity, health knowledge and healthy food intentions, as assessed through path analysis techniques. Both the baseline model (χ2 (df = 8) = 16.90, p = .031; RMSEA = .035, 90% CI .010–.058; NNFI = 0.983; CFI = 0.995) and the model incorporating intervention participation (χ2 (df = 10) = 11.59, p = .314; RMSEA = .013, 90% CI .010–.039; NNFI = 0.996; CFI = 0.999) proved to be good fits to the data. Experimental group participation was not predictive of changes in health knowledge, intentions to eat healthy foods or changes in Body Mass Index, but it was associated with increased aerobic capacity, β = .067, p < .05. School characteristics, including SES and language proficiency, proved to be significantly associated with changes in knowledge and physical indicators. Further study of the effects of school-level variables on intervention outcomes is recommended so that tailored interventions can be developed, aimed at the specific characteristics of each participating school.
Abstract:
Helicobacter pylori infection is frequently acquired during childhood. This microorganism is known to cause gastritis and duodenal ulcers in pediatric patients; however, most children remain completely asymptomatic to the infection. Currently there is no consensus in favor of treatment of H. pylori infection in asymptomatic children. The first-line treatment for this population is triple therapy, comprising two antibacterial agents and one proton pump inhibitor for a 2-week course. Eradication rates of less than 75% have been documented with the use of this first-line therapy, but novel tinidazole-containing quadruple sequential therapies seem worth investigating. None of the previous studies on such therapy has been done in the United States of America. As part of an iron deficiency anemia study in asymptomatic H. pylori-infected children of El Paso, Texas, we conducted a secondary analysis of data collected in this trial to assess the effectiveness of this tinidazole-containing sequential quadruple therapy, compared to placebo, in clearing the infection. Subjects were selected from a group of asymptomatic children identified through household visits to 11,365 randomly selected dwelling units. After obtaining parental consent and child assent, a total of 1,821 children 3-10 years of age were screened; 235 were positive on a novel urine immunoglobulin G antibody test for H. pylori infection and were confirmed as infected using a 13C urea breath test, with a urea hydrolysis rate >10 μg/min as the cut-off value. Of those, 119 study subjects had a complete physical exam and baseline blood work and were randomly allocated to four groups, two of which received active H. pylori eradication medication alone or in combination with iron, while the other two received iron only or placebo only.
Follow-up visits to their houses were done to assess compliance and occurrence of adverse events, and at 45+ days post-treatment a second urea breath test was performed to assess their infection status. Effectiveness was primarily assessed on an intent-to-treat basis (i.e., according to treatment allocation); the primary outcome was the proportion of children who cleared their infection, using a cut-off value >10 μg/min for the urea hydrolysis rate. We also conducted analyses on a per-protocol basis and according to the cytotoxin-associated gene A (CagA) status of the H. pylori infection. We also compared the rate of adverse events across the two arms. On intent-to-treat and per-protocol analyses, 44.3% and 52.9%, respectively, of the children receiving the novel quadruple sequential eradication therapy cleared their infection, compared to 12.2% and 15.4% in the arms receiving iron or placebo only, respectively. These differences were statistically significant (p<0.001). The study medications were well accepted and safe. In conclusion, we found in this study population of mostly asymptomatic H. pylori-infected children, living in the US along the border with Mexico, that the quadruple sequential eradication therapy cleared the infection in only about half of the children receiving this treatment. Research is needed to assess the antimicrobial susceptibility of the strains of H. pylori infecting this population in order to formulate more effective therapies.
Abstract:
This dissertation develops and tests a comparative effectiveness methodology utilizing a novel approach to the application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis results in the discrimination of the individual data observations into a relative risk classification by the DEA-PerT methodology. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was subsequently tested to classify new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance to assess and implement treatments and strategies to improve the delivery and performance of health systems.
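The second step described above, assigning a new entrant to a tier by distance, can be sketched with a minimal Mahalanobis-based classifier. This is an illustrative construction, not the dissertation's implementation; the function name, the dict-of-arrays layout, and the pseudo-inverse fallback are all assumptions.

```python
import numpy as np

def mahalanobis_tier(x, tier_samples):
    """Assign a new observation to the performance tier whose members
    it is closest to in Mahalanobis distance (illustrative sketch).

    x:            (d,) feature vector of the new entrant
    tier_samples: dict mapping tier label -> (n_i, d) array of members
    Returns (best_label, best_distance).
    """
    best_label, best_dist = None, np.inf
    for label, pts in tier_samples.items():
        mu = pts.mean(axis=0)
        # pseudo-inverse guards against a singular covariance estimate
        cov_inv = np.linalg.pinv(np.cov(pts, rowvar=False))
        diff = x - mu
        dist = float(np.sqrt(diff @ cov_inv @ diff))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist
```

A kNN variant would instead vote among the k nearest individual members across all tiers; the Mahalanobis form above accounts for within-tier correlation of the features.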
Abstract:
The role of clinical chemistry has traditionally been to evaluate acutely ill or hospitalized patients. Traditional statistical methods have serious drawbacks in that they use univariate techniques. To demonstrate alternative methodology, a multivariate analysis of covariance model was developed and applied to the data from the Cooperative Study of Sickle Cell Disease (CSSCD). The purpose of developing the model for the laboratory data from the CSSCD was to evaluate the comparability of the results from the different clinics. Several variables were incorporated into the model in order to control for possible differences among the clinics that might confound any real laboratory differences. Differences for LDH, alkaline phosphatase and SGOT were identified, which will necessitate adjustments by clinic whenever these data are used. In addition, aberrant clinic values for LDH, creatinine and BUN were also identified. The use of any statistical technique, including multivariate analysis, without thoughtful consideration may lead to spurious conclusions that may not be corrected for some time, if ever. However, the advantages of multivariate analysis far outweigh its potential problems. If its use increases as it should, the applicability to the analysis of laboratory data in prospective patient monitoring, quality control programs, and interpretation of data from cooperative studies could well have a major impact on the health and well-being of a large number of individuals.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
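The trend-analysis idea described in these abstracts — turning a raw time series into a single latent "deterioration" feature — can be illustrated with a minimal sliding-window slope extractor. This is a generic sketch, not the authors' pipeline; the window length, function name, and the least-squares choice are assumptions.

```python
import numpy as np

def trend_features(series, window):
    """Least-squares slope of the trailing `window` samples: one simple
    way to turn a raw time series into a single trend feature per time
    point (illustrative sketch).

    Returns an array of slopes, one per time point; entries are NaN
    until the first full window is available.
    """
    t = np.arange(window, dtype=float)
    slopes = np.full(len(series), np.nan)
    for i in range(window - 1, len(series)):
        y = np.asarray(series[i - window + 1 : i + 1], dtype=float)
        # degree-1 polyfit: leading coefficient is the slope of y vs t
        slopes[i] = np.polyfit(t, y, 1)[0]
    return slopes
```

In a model of the kind described, such slopes (computed per vital sign, at the duration and resolution fixed in the design phase) would join the candidate feature set alongside the point-in-time multivariate values.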
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the former cycle from that in the latter cycle, transforming the n-cycle raw data into n-1 cycles of differenced data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial numbers of DNA molecules were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, threshold identification, max R2, and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R2 and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and thus is theoretically more accurate and reliable.
This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
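The taking-difference idea lends itself to a compact sketch. Under an exponential-phase model F_n = F0·E^n + background, differencing consecutive cycles cancels the constant background, so ln(d_n) is linear in the cycle number with slope ln(E). The function name, window arguments, and pure-exponential assumption below are illustrative; this is not the authors' implementation, and their linear mixed-model variant is not reproduced.

```python
import numpy as np

def taking_difference_fit(fluorescence, start, end):
    """Taking-difference linear regression for qPCR (illustrative sketch).

    Model: F_n = F0 * E**n + background (constant). Then
        d_n = F_{n+1} - F_n = F0 * (E - 1) * E**n,
    so a linear fit of ln(d_n) against n recovers ln(E) as the slope,
    with no background subtraction needed.

    fluorescence: raw readings, one per cycle
    start, end:   0-based half-open cycle window assumed to lie in the
                  exponential phase
    Returns (E, F0): amplification efficiency and initial amount.
    """
    f = np.asarray(fluorescence, dtype=float)
    d = np.diff(f)[start:end]               # consecutive-cycle differences
    n = np.arange(start, end, dtype=float)  # cycle indices for the window
    slope, intercept = np.polyfit(n, np.log(d), 1)
    E = np.exp(slope)
    F0 = np.exp(intercept) / (E - 1.0)
    return E, F0
```

Because the constant background drops out of every difference, the estimate is insensitive to the baseline choice that the CT and original linear regression methods must make.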
Abstract:
This data set contains soil carbon measurements (Organic carbon, inorganic carbon, and total carbon; all measured in dried soil samples) from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. Soil sampling and analysis: Stratified soil sampling was performed in April 2006 to a depth of 30 cm. Three samples per plot were taken using a split tube sampler with an inner diameter of 4.8 cm (Eijkelkamp Agrisearch Equipment, Giesbeek, the Netherlands). Sampling locations were less than 30 cm apart from sampling locations in 2002. Soil samples were segmented into 5 cm depth segments in the field (resulting in six depth layers) and made into composite samples per depth. Subsequently, samples were dried at 40°C. All soil samples were passed through a sieve with a mesh size of 2 mm. Because of much higher proportions of roots in the soil, samples in years after 2002 were further sieved to 1 mm according to common root removal methods. No additional mineral particles were removed by this procedure. Total carbon concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany). We measured inorganic carbon concentration by elemental analysis at 1150°C after removal of organic carbon for 16 h at 450°C in a muffle furnace.
Organic carbon concentration was calculated as the difference between the total and inorganic carbon measurements.
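The carbon-by-difference step described above can be sketched as a small function; this is an illustrative reconstruction, not the authors' code, and the example concentrations and units are assumed, not taken from the dataset:

```python
def organic_carbon(total_c: float, inorganic_c: float) -> float:
    """Organic carbon as total carbon minus inorganic carbon.

    total_c: total C from elemental analysis of the untreated subsample.
    inorganic_c: C remaining after organic carbon was combusted away
    (16 h at 450 degrees C in a muffle furnace), measured the same way.
    """
    return total_c - inorganic_c

# Illustrative values in g C per kg dry soil (hypothetical, not from the data set):
print(round(organic_carbon(25.4, 3.1), 2))  # prints 22.3
```

The same subtraction applies per plot and per 5 cm depth layer, since total and inorganic carbon are measured on subsamples of the same composite sample.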
Total nitrogen from solid phase in the Jena Experiment (Main Experiment up to 30cm depth, year 2008)
Resumo:
This data set contains measurements of total nitrogen from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. Soil sampling and analysis: Stratified soil sampling was performed in April 2008 to a depth of 30 cm. Three independent samples per plot were taken using a split tube sampler with an inner diameter of 4.8 cm (Eijkelkamp Agrisearch Equipment, Giesbeek, the Netherlands). Soil samples were segmented to a depth resolution of 5 cm in the field, giving six depth subsamples per core, and made into composite samples per depth. Sampling locations were less than 30 cm apart from sampling locations in other years. Samples were dried at 40°C. All soil samples were passed through a sieve with a mesh size of 2 mm. Because of much higher proportions of roots in the soil, the samples were further sieved to 1 mm according to common root removal methods. No additional mineral particles were removed by this procedure. Total nitrogen concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s^-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany).
Total nitrogen from solid phase in the Jena Experiment (Main Experiment up to 30cm depth, year 2004)
Resumo:
This data set contains measurements of total nitrogen from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. Soil sampling and analysis: Stratified soil sampling was performed in April 2004 to a depth of 30 cm. Three independent samples per plot were taken using a split tube sampler with an inner diameter of 4.8 cm (Eijkelkamp Agrisearch Equipment, Giesbeek, the Netherlands). Soil samples were segmented to a depth resolution of 5 cm in the field, giving six depth subsamples per core, and made into composite samples per depth. Sampling locations were less than 30 cm apart from sampling locations in other years. Samples were dried at 40°C. All soil samples were passed through a sieve with a mesh size of 2 mm. Because of much higher proportions of roots in the soil, the samples were further sieved to 1 mm according to common root removal methods. No additional mineral particles were removed by this procedure. Total nitrogen concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s^-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany).
Resumo:
Recent works (Evelpidou et al., 2012) suggest that the modern tidal notch is disappearing worldwide due to sea-level rise over the last century. To assess this hypothesis, we measured modern tidal notches at several sites along the Mediterranean coasts. We report observations on tidal notches cut into carbonate coasts at 73 sites in Italy, France, Croatia, Montenegro, Greece, Malta and Spain, plus additional observations carried out outside the Mediterranean. At each site, we measured notch width and depth, and we described the characteristics of the biological rim at the base of the notch. We correlated these parameters with wave energy, tide gauge datasets and rock lithology. Our results suggest that treating 'the development of tidal notches [as] the consequence of midlittoral bioerosion' (as done in Evelpidou et al., 2012) is a simplification that can lead to misleading conclusions, such as stating that notches are disappearing. Wave action, the rate of karst dissolution, salt weathering, and wetting-and-drying cycles can also play important roles in notch formation. Notch formation can, of course, also be augmented and favoured by bioerosion, which in particular cases can be the main process of notch formation and development. Our dataset shows that notches are carved by an ensemble of processes rather than by a single one, both today and in the past, and that it is difficult, if not impossible, to disentangle them and establish which one prevails. We therefore show that tidal notches are still forming, challenging the hypothesis that sea-level rise has drowned them.
Resumo:
This data set contains soil carbon measurements (organic carbon, inorganic carbon, and total carbon; all measured in dried soil samples) from the main experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the main experiment, 82 grassland plots of 20 x 20 m were established from a pool of 60 species belonging to four functional groups (grasses, legumes, tall and small herbs). In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 4, 8, 16 and 60 species) and functional richness (1, 2, 3, 4 functional groups). Plots were maintained by bi-annual weeding and mowing. Soil sampling and analysis: Stratified soil sampling was performed in April 2008 to a depth of 30 cm. Three samples per plot were taken using a split tube sampler with an inner diameter of 4.8 cm (Eijkelkamp Agrisearch Equipment, Giesbeek, the Netherlands). Sampling locations were less than 30 cm apart from sampling locations in 2002. Soil samples were segmented into 5 cm depth segments in the field (resulting in six depth layers) and made into composite samples per depth. Subsequently, samples were dried at 40°C. All soil samples were passed through a sieve with a mesh size of 2 mm. Because of much higher proportions of roots in the soil, samples in years after 2002 were further sieved to 1 mm according to common root removal methods. No additional mineral particles were removed by this procedure. Total carbon concentration was analyzed on ball-milled subsamples (time 4 min, frequency 30 s^-1) by an elemental analyzer at 1150°C (Elementaranalysator vario Max CN; Elementar Analysensysteme GmbH, Hanau, Germany). We measured inorganic carbon concentration by elemental analysis at 1150°C after removal of organic carbon for 16 h at 450°C in a muffle furnace.
Organic carbon concentration was calculated as the difference between the total and inorganic carbon measurements.