47 results for "high-use area"
Abstract:
Intensification of land use in semi-natural hay meadows has resulted in a decrease in species diversity. This is often thought to be caused by the reduced establishment of plant species due to high competition for light under conditions of increased productivity. Sowing experiments in grasslands have found reliable evidence that diversity can also be constrained by seed availability, implying that processes influencing the production and persistence of seeds may be important for the functioning of ecosystems. So far, the effects of land-use intensification on the seed rain and the persistence of seeds in the soil have been unclear. We selected six pairs of extensively managed (Festuco-Brometea) and intensively managed (Arrhenatheretalia) grassland with traditional late cutting regimes across Switzerland and covering an annual productivity gradient in the range 176–1211 g m−2. In each grassland community, we estimated seed rain and seed bank using eight pooled seed-trap or topsoil samples of 89 cm² in each of six plots representing an area of c. 150 m². The seed traps were established in spring 2010 and collected simultaneously with soil cores after an exposure of c. three months. We applied the emergence method in a cold frame over eight months to estimate density of viable seeds. With community productivity reflecting land-use intensification, the density and species richness in the seed rain increased, while mean seed size diminished and the proportions of persistent seeds and of species with persistent seeds in the topsoil declined. Stronger limitation of seeds in extensively managed semi-natural grasslands can explain the fact that such grasslands are not always richer in species than more intensively managed ones.
Abstract:
Adding to the ongoing debate regarding vegetation recolonisation in Europe (more particularly its timing) and climate change since the Lateglacial, this study investigates a long sediment core (LL081) from Lake Ledro (652 m a.s.l., southern Alps, Italy). Environmental changes were reconstructed using multiproxy analyses (pollen-based vegetation and climate reconstructions, lake levels, magnetic susceptibility and X-ray fluorescence (XRF) measurements) that record climate and land-use changes during the Lateglacial and the early-middle Holocene. The well-dated, high-resolution pollen record of Lake Ledro is compared with vegetation records from the southern and northern Alps to trace the history of tree species distribution. An altitude-dependent progressive time delay in the first continuous occurrence of Abies (fir) and in the development of Larix (larch) has been observed since the Lateglacial in the southern Alps. This pattern suggests that the mid-altitude Lake Ledro area was not a refuge and that trees spread from lowlands or hilly areas (e.g. the Euganean Hills) in northern Italy. Preboreal oscillations (ca. 11 000 cal BP), Boreal oscillations (ca. 10 200 and 9300 cal BP) and the 8.2 kyr cold event suggest centennial-scale climate forcing in the study area. Picea (spruce) expanded preferentially around 10 200 and 8200 cal BP in the south-eastern Alps, reflecting the long-lasting cumulative effects of the successive Boreal oscillations and the 8.2 kyr cold event. The expansion of Abies is contemporaneous with the 8.2 kyr event, but its development in the southern Alps benefited from the wettest interval, 8200-7300 cal BP, evidenced by high lake levels, flood activity and pollen-based climate reconstructions. Since ca. 7500 cal BP, a weak signal of pollen-based anthropogenic indicators suggests limited human impact. The period between ca. 5700 and ca. 4100 cal BP is considered a transition to colder and wetter conditions (particularly during summers) that favoured the development of dense beech (Fagus) forest, which in turn caused a distinctive yew (Taxus) decline. We conclude that climate was the dominant factor controlling vegetation changes and erosion processes during the early and middle Holocene (up to ca. 4100 cal BP).
Abstract:
Complementarity that leads to more efficient resource use is presumed to be a key mechanism explaining positive biodiversity–productivity relationships but has been described solely for experimental set-ups with controlled environmental settings or for very short gradients of abiotic conditions, land-use intensity and biodiversity. Therefore, we analysed plant diversity effects on nitrogen dynamics across a broad range of Central European grasslands. The 15N natural abundance in soil and plant biomass reflects the net effect of processes affecting ecosystem N dynamics. This includes the mechanism of complementary resource utilization that causes a decrease in the 15N isotopic signal. We measured plant species richness, natural abundance of 15N in soil and plants, above-ground biomass of the community and three single species (an herb, grass and legume) and a variety of additional environmental variables in 150 grassland plots in three regions of Germany. To explore the drivers of the nitrogen dynamics, we performed several analyses of covariance treating the 15N isotopic signals as a function of plant diversity and a large set of covariates. Increasing plant diversity was consistently linked to decreased δ15N isotopic signals in soil, above-ground community biomass and the three single species. Even after accounting for multiple covariates, plant diversity remained the strongest predictor of δ15N isotopic signals suggesting that higher plant diversity leads to a more closed nitrogen cycle due to more efficient nitrogen use. Factors linked to increased δ15N values included the amount of nitrogen taken up, soil moisture and land-use intensity (particularly fertilization), all indicators of the openness of the nitrogen cycle due to enhanced N-turnover and subsequent losses. Study region was significantly related to the δ15N isotopic signals indicating that regional peculiarities such as former intensive land use could strongly affect nitrogen dynamics. Synthesis. 
Our results provide strong evidence that the mechanism of complementary resource utilization operates in real-world grasslands where multiple external factors affect nitrogen dynamics. Although single species may differ in effect size, actively increasing total plant diversity in grasslands could be an option to more effectively use nitrogen resources and to reduce the negative environmental impacts of nitrogen losses.
Abstract:
RATIONALE In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. OBJECTIVE To assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. METHODS AND RESULTS All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information on empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and Circulation: Heart Failure having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89% SEM error bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). LIMITATIONS The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative of all cardiovascular journals. CONCLUSION In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements announcing its use for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information on empirical data.
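The SD-versus-SEM distinction at issue can be illustrated numerically (a minimal sketch with made-up data; the values below are not from the survey):

```python
import math

# Hypothetical sample of n measurements (illustrative values only)
data = [4.1, 5.0, 3.8, 4.6, 5.3, 4.4, 4.9, 4.2]
n = len(data)
mean = sum(data) / n

# Sample standard deviation: describes the spread of the data themselves
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Standard error of the mean: describes the precision of the mean estimate;
# it shrinks with sqrt(n), which is why SEM bars understate data spread
sem = sd / math.sqrt(n)

# Approximate 95% confidence interval for the mean (normal approximation)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
```

Reporting `sd` describes the sample; reporting `sem` (or `ci`) describes the uncertainty of the mean, which is the distinction the survey found routinely blurred.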
Abstract:
BACKGROUND High-risk prostate cancer (PCa) is an extremely heterogeneous disease. A clear definition of prognostic subgroups is mandatory. OBJECTIVE To develop a pretreatment prognostic model for PCa-specific survival (PCSS) in high-risk PCa based on combinations of unfavorable risk factors. DESIGN, SETTING, AND PARTICIPANTS We conducted a retrospective multicenter cohort study including 1360 consecutive patients with high-risk PCa treated at eight European high-volume centers. INTERVENTION Retropubic radical prostatectomy with pelvic lymphadenectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Two Cox multivariable regression models were constructed to predict PCSS as a function of dichotomized clinical stage (< cT3 vs cT3-4), Gleason score (GS) (2-7 vs 8-10), and prostate-specific antigen (PSA; ≤ 20 ng/ml vs > 20 ng/ml). The first "extended" model includes all seven possible combinations; the second "simplified" model includes three subgroups: a good prognosis subgroup (one single high-risk factor); an intermediate prognosis subgroup (PSA > 20 ng/ml and stage cT3-4); and a poor prognosis subgroup (GS 8-10 in combination with at least one other high-risk factor). The predictive accuracy of the models was summarized and compared. Survival estimates and clinical and pathologic outcomes were compared between the three subgroups. RESULTS AND LIMITATIONS The simplified model yielded an R² of 33% with a 5-yr area under the curve (AUC) of 0.70, with no significant loss of predictive accuracy compared with the extended model (R²: 34%; AUC: 0.71). The 5- and 10-yr PCSS rates were 98.7% and 95.4%, 96.5% and 88.3%, and 88.8% and 79.7% for the good, intermediate, and poor prognosis subgroups, respectively (p = 0.0003). Overall survival, clinical progression-free survival, and histopathologic outcomes significantly worsened in a stepwise fashion from the good to the poor prognosis subgroups.
Limitations of the study are the retrospective design and the long study period. CONCLUSIONS This study presents an intuitive and easy-to-use stratification of high-risk PCa into three prognostic subgroups. The model is useful for counseling and decision making in the pretreatment setting.
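The simplified three-subgroup rule described above can be sketched as a small function (variable names and the function itself are my own; the thresholds follow the abstract):

```python
def high_risk_subgroup(stage_ct3_4: bool, gleason: int, psa: float) -> str:
    """Assign a high-risk PCa patient to the simplified prognostic subgroup.

    Risk factors per the abstract: clinical stage cT3-4, Gleason score 8-10,
    PSA > 20 ng/ml. Assumes the patient is high risk (>= 1 factor present).
    """
    gs_high = 8 <= gleason <= 10
    psa_high = psa > 20.0
    n_factors = sum([stage_ct3_4, gs_high, psa_high])
    if gs_high and n_factors >= 2:
        return "poor"          # GS 8-10 plus at least one other factor
    if psa_high and stage_ct3_4:
        return "intermediate"  # PSA > 20 ng/ml and cT3-4, without GS 8-10
    return "good"              # a single high-risk factor
```

For example, a cT3 tumour with GS 7 and PSA 10 ng/ml falls in the good prognosis subgroup, while the same stage with GS 9 falls in the poor prognosis subgroup.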
Abstract:
PURPOSE Rapid assessment and intervention are important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %) and septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92, and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
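A multivariable logistic model of this form maps predictor values to a death probability through the logistic function. A schematic sketch follows; the coefficients and the reduced predictor set are placeholders for illustration, not the fitted values from the study:

```python
import math

# Hypothetical coefficients (NOT from the study, which reports the predictor
# set but whose fitted values are not given in the abstract)
COEF = {
    "intercept": -6.0,
    "age_per_year": 0.04,
    "prolonged_cap_refill": 1.2,  # 1 if capillary refill time prolonged, else 0
    "mech_ventilation": 1.5,      # 1 if mechanically ventilated, else 0
    "gcs_per_point": -0.15,       # lower Glasgow coma score -> higher risk
}

def death_probability(age: float, cap_refill: int,
                      ventilated: int, gcs: int) -> float:
    """Logistic regression: p = 1 / (1 + exp(-linear_predictor))."""
    lp = (COEF["intercept"]
          + COEF["age_per_year"] * age
          + COEF["prolonged_cap_refill"] * cap_refill
          + COEF["mech_ventilation"] * ventilated
          + COEF["gcs_per_point"] * gcs)
    return 1.0 / (1.0 + math.exp(-lp))
```

At admission, such a score can be computed from triage data alone, which is what allows it to be compared against the provider's non-systematic estimate.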
Abstract:
Off-site effects of soil erosion are becoming increasingly important, particularly the pollution of surface waters. In order to develop environmentally efficient and cost-effective mitigation options it is essential to identify areas that bear both a high erosion risk and high connectivity to surface waters. This paper introduces a simple risk assessment tool that allows the delineation of potential critical source areas (CSA) of sediment input into surface waters for the agricultural areas of Switzerland. The basis is the erosion risk map with 2 m resolution (ERM2) and the drainage network, which is extended by drained roads, farm tracks and slope depressions. The probability of hydrological and sedimentological connectivity is assessed by combining soil erosion risk and the extended drainage network with flow distance calculation. A GIS environment with multiple-flow accumulation algorithms is used for routing runoff generation and flow pathways. The result is a high-resolution connectivity map of the agricultural area of Switzerland (888,050 ha). Fifty-five percent of the computed agricultural area is potentially connected with surface waters; 45% is not connected. Surprisingly, the larger share, 34% (62% of the connected area), is indirectly connected with surface waters through drained roads, and only 21% is directly connected. The reason is the topographic complexity and patchiness of the landscape due to a dense road and drainage network. A total of 24% of the connected area, or 13% of the computed agricultural area, is rated with a high connectivity probability. On these CSAs adapted land use is recommended, supported by vegetated buffer strips that reduce sediment loads. Even areas far away from open water bodies can be indirectly connected and need to be included in the planning of mitigation measures. Thus, the connectivity map presented is an important decision-making tool for policy-makers and extension services. The map is published on the web and is thus available for application.
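The core routing idea, tracing runoff downslope and testing whether it reaches the drainage network, can be illustrated on a toy elevation grid. The sketch below uses single-direction steepest descent, a simplification of the multiple-flow accumulation algorithms the tool actually uses, and the grid values are invented:

```python
# Toy elevation grid; (0, 0) is a closed depression (a sink), and the
# lowest cell belongs to the (extended) drainage network.
ELEV = [
    [2, 8, 7],
    [8, 6, 4],
    [7, 5, 1],
]
WATER = {(2, 2)}  # drainage-network cell in this toy example

def is_connected(r, c, elev=ELEV, water=WATER):
    """Follow steepest descent from (r, c); True if the path reaches water."""
    rows, cols = len(elev), len(elev[0])
    seen = set()
    while (r, c) not in water:
        seen.add((r, c))
        # 8-connected neighbours that are strictly downslope
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols
                and elev[r + dr][c + dc] < elev[r][c]]
        if not nbrs:
            return False  # local sink: runoff ponds, cell is not connected
        r, c = min(nbrs, key=lambda p: elev[p[0]][p[1]])  # steepest descent
        if (r, c) in seen:
            return False
    return True
```

Adding drained roads and farm tracks to `WATER` is what makes many apparently distant cells indirectly connected, the pattern the Swiss map reveals.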
Abstract:
A representative study among e-bike owners and renters in Switzerland (n = 1652) provides a deeper understanding of e-bike users' characteristics, motives, values, usage behaviour and barriers to use. A microsimulation then estimates the implications of the findings for energy demand and CO2 emissions.
Abstract:
BACKGROUND High-dose benzodiazepine (BZD) dependence is associated with a wide variety of negative health consequences. Affected individuals are reported to suffer from severe mental disorders and are often unable to achieve long-term abstinence via recommended discontinuation strategies. Although it is increasingly understood that treatment interventions should take subjective experiences and beliefs into account, the perceptions of this group of individuals remain under-investigated. METHODS We conducted an exploratory qualitative study with 41 adult subjects meeting criteria for (high-dose) BZD dependence, as defined by ICD-10. One-on-one in-depth interviews allowed for an exploration of this group's views on the reasons behind their initial and then continued use of BZDs, as well as their procurement strategies. Mayring's qualitative content analysis was used to evaluate our data. RESULTS In this sample, all participants had developed explanatory models for why they began using BZDs. We identified a multitude of reasons for continued BZD use, which we grouped into four broad categories: (1) to cope with symptoms of psychological distress or mental disorder other than substance use, (2) to manage symptoms of physical or psychological discomfort associated with somatic disorder, (3) to alleviate symptoms of substance-related disorders, and (4) for recreational purposes, that is, sensation-seeking and other social reasons. Subjects often considered BZDs less dangerous than other substances and more often framed their use as harm reduction than as recreational. Specific procurement strategies varied widely: the majority of participants oscillated between legal and illegal methods, often relying on the black market when faced with treatment termination. CONCLUSIONS Irrespective of comorbidity, participants expressed a clear preference for medically related explanatory models for their BZD use.
We therefore suggest that clinicians consider patients' motives for long-term, high-dose BZD use when formulating treatment plans for this patient group, especially since it is known that individuals are more compliant with approaches they perceive to be manageable, tolerable, and effective.
Abstract:
BACKGROUND AND AIMS Smoking is a crucial environmental factor in inflammatory bowel disease [IBD]. However, knowledge of patient characteristics associated with smoking, time trends in smoking rates, gender differences and supportive measures to cease smoking provided by physicians is scarce. We aimed to address these questions in Swiss IBD patients. METHODS Prospectively obtained data from patients participating in the Swiss IBD Cohort Study were analysed and compared with the general Swiss population [GSP] matched by age, sex and year. RESULTS Among a total of 1770 IBD patients analysed [49.1% male], 29% are current smokers. More than twice as many patients with Crohn's disease [CD] are active smokers compared with ulcerative colitis [UC] [CD 39.6% vs UC 15.3%, p < 0.001]. In striking contrast to the GSP, significantly more women than men with CD smoke [42.8% vs 35.8%, p = 0.025], with an overall significantly increased smoking rate compared with the GSP in women but not in men. The vast majority of smoking IBD patients [90.5%] report never having received any support to achieve smoking cessation, significantly more in UC than in CD. We identify a significantly negative association between smoking and primary sclerosing cholangitis, indicative of a protective effect. Psychological distress in CD is significantly higher in smokers than in non-smokers, but does not differ in UC. CONCLUSIONS Despite well-established detrimental effects, smoking rates in CD are alarmingly high and have stagnated at elevated levels compared with the GSP, especially in female patients. Importantly, there appears to be an unacceptable underuse of supportive measures to achieve smoking cessation.