848 results for Native Vegetation Condition, Benchmarking, Bayesian Decision Framework, Regression, Indicators
Abstract:
According to prevailing ecological theory, one would expect the most stable vegetation on the sites that are least disturbed (Odum 1971). Theory would also predict the greatest diversity of species on undisturbed sites (Odum 1971). This stable and diverse community would be produced over many years through a process of plant succession in which annual herbs are replaced by perennial herbs and, finally, woody plants come to dominate and perpetuate the community. Another ecological theory holds that the complexity (structure and species diversity) of a plant community depends upon the amount of disturbance to which it is subjected (Woodwell 1970). According to this theory, the normal succession of a plant community through its various stages may be arrested at some point, depending upon the nature and severity of the disturbance. In applying these theories to roadside vegetation, it becomes apparent that mass herbicide spraying and extensive mowing of roadsides have produced relatively simple and unstable vegetation. It follows that if disturbances were reduced, not only would the roadside plant community increase in stability, but maintenance costs and energy usage would also be reduced. In this study we have investigated several aspects of reduced disturbance of roadside vegetation. Research has centered on the effectiveness of spot-spraying techniques for noxious weed control, the establishment of native grass cover where ditch cleaning and other disturbances have left bare soil exposed, and the response of roadside vegetation when released from annual mass spraying.
Abstract:
Intensive agriculture, in which detrimental farming practices lessen food abundance and/or reduce food accessibility for many animal species, has led to a widespread collapse of farmland biodiversity. Vineyards in central and southern Europe are intensively cultivated; though they may still harbour several rare plant and animal species, they remain little studied. Over the past decades, there has been a considerable reduction in the application of insecticides in wine production, with a progressive shift to biological control (integrated production) and, to a lesser extent, organic production. Spraying of herbicides has also diminished, which has led to more vegetation cover on the ground, although most vineyards remain bare, especially in southern Europe. The effects of these potentially positive environmental trends upon biodiversity remain mostly unknown as regards vertebrates. The Woodlark (Lullula arborea) is an endangered, short-distance migratory bird that forages and breeds on the ground. In southern Switzerland (Valais), it occurs mostly in vineyards. We used radiotracking and mixed effects logistic regression models to assess Woodlark response to modern vineyard farming practices, study the factors driving foraging micro-habitat selection, and determine the optimal habitat profile to inform management. The presence of ground vegetation cover was the main factor dictating the selection of foraging locations, with an optimum around 55% at the foraging-patch scale. These conditions are met in integrated production vineyards, but only when grass is tolerated on part of the ground surface, which is the case on ca. 5% of the total Valais vineyard area. In contrast, conventionally managed vineyards, covering ≥95% of the vineyard area, are too bare because of systematic application of herbicides all over the ground, whilst the rare organic vineyards usually have too dense a sward. The optimal mosaic with ca. 50% ground vegetation cover is currently achieved in integrated production vineyards where herbicide is applied every second row. In organic production, ca. 50% ground vegetation cover should be promoted, which requires regular mechanical removal of ground vegetation. These measures are likely to benefit general biodiversity in vineyards.
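As a rough illustration of how such a micro-habitat optimum can be recovered from used-versus-available data, here is a minimal sketch of a quadratic logistic regression on synthetic data; the variable names and coefficients are invented, and the study itself fitted mixed effects models accounting for individual birds, which this sketch omits:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Synthetic used/available design: 1 = foraging location, 0 = random point.
# Selection is simulated to peak at intermediate ground vegetation cover.
n = 400
cover = rng.uniform(0, 100, n)                      # % ground vegetation cover
logit_p = -4 + 0.16 * cover - 0.0015 * cover**2
used = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"used": used, "cover": cover})

# Quadratic logistic regression; the optimum is where the fitted
# selection curve peaks: -b1 / (2 * b2)
fit = smf.logit("used ~ cover + I(cover**2)", df).fit(disp=0)
b1, b2 = fit.params["cover"], fit.params["I(cover ** 2)"]
print(f"estimated optimum cover: {-b1 / (2 * b2):.1f} %")
```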
Abstract:
Inadequate use can degrade natural resources, particularly soils. In recent years, more attention has been paid to practices aiming at the recovery of degraded soils, e.g., the use of organic fertilizers, liming, and the introduction of species adapted to adverse conditions. The purpose of this study was therefore to investigate the recovery of the physical properties of a Red Latosol (Oxisol) degraded by the construction of a hydroelectric power station. In the study area, a soil layer about 8 m thick had been removed by heavy machinery, leading not only to soil compaction but also to a high degree of degradation. The experiment was arranged in a completely randomized design with nine treatments and four replications. The treatments consisted of: 1- soil mobilization by tilling (to ensure the effect of mechanical mobilization in all treatments) without planting, but with growth of spontaneous vegetation; 2- black velvet bean (Stizolobium aterrimum Piper & Tracy); 3- pigeonpea (Cajanus cajan (L.) DC); 4- liming + black velvet bean; 5- liming + pigeonpea until 1994, when replaced by jack bean (Canavalia ensiformis); 6- liming + gypsum + black velvet bean; 7- liming + gypsum + pigeonpea until 1994, when replaced by jack bean; and two controls as reference: 8- native Cerrado vegetation and 9- bare soil (no tilling and no planting), left under natural conditions and, in this situation, without spontaneous vegetation. In treatments 1 through 7, the soil was tilled. Treatments were installed in 1992 and left unmanaged for seven years, until brachiaria (Brachiaria decumbens) was planted in all plots in 1999. Seventeen years after establishment, soil macroporosity, microporosity, total porosity, bulk density, and aggregate stability were assessed in the 0.00-0.10, 0.10-0.20, and 0.20-0.40 m layers, and soil penetration resistance and soil moisture in the 0.00-0.15 and 0.15-0.30 m layers. The plants were evaluated for brachiaria dry matter and, as of 2006, the spontaneous growth of native tree species in the plots. Results were analyzed by analysis of variance, with Tukey's test at 5% for mean comparison. In all treatments except the bare soil (no recovery measures), ongoing recovery of the degraded soil's physical properties was observed. Macroporosity, bulk density, and total porosity were good soil quality indicators. The occurrence of spontaneous native species indicated the soil recovery process. The best-adapted species was Machaerium acutifolium Vogel, with the largest number of plants and the most advanced development; the dry matter production of B. decumbens on the recovering soil was similar to that under normal conditions, evidencing soil recovery.
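For readers unfamiliar with the mean-comparison step, a minimal sketch of Tukey's HSD test on synthetic data shaped like this design (9 treatments x 4 replications; all values invented) might look like this:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)

# Hypothetical macroporosity values (m3 m-3): 9 treatments x 4 replications,
# mimicking the completely randomized design described in the abstract
treatments = np.repeat([f"T{i}" for i in range(1, 10)], 4)
group_means = rng.uniform(0.08, 0.20, 9)           # illustrative group means
values = np.concatenate([rng.normal(m, 0.02, 4) for m in group_means])

# Tukey's test at the 5% level for pairwise mean comparison,
# following the overall analysis of variance
tukey = pairwise_tukeyhsd(endog=values, groups=treatments, alpha=0.05)
print(tukey.summary())
```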
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models, and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, together with statistical error rates under model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its Type I error rates were poorly controlled, so it did not show the improvements in performance under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
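A minimal sketch of the core comparison (ordinary versus generalized least squares on synthetic spatially autocorrelated data) is below; the exponential correlation range and coefficients are invented, and the covariance is supplied as known rather than estimated as the compared methods would do:

```python
import numpy as np
import statsmodels.api as sm
from scipy.spatial.distance import cdist

rng = np.random.default_rng(42)

# Synthetic data set with spatially autocorrelated errors
n = 200
coords = rng.uniform(0, 100, size=(n, 2))
x = rng.normal(size=n)
Sigma = np.exp(-cdist(coords, coords) / 20.0)        # exponential correlogram
e = np.linalg.cholesky(Sigma) @ rng.normal(size=n)   # correlated noise
y = 1.0 + 0.5 * x + e

X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
# GLS with the (here, known) error covariance; in practice Sigma is
# estimated from the data, e.g. from a variogram or by maximum likelihood
gls = sm.GLS(y, X, sigma=Sigma).fit()

print("OLS slope, SE:", ols.params[1], ols.bse[1])
print("GLS slope, SE:", gls.params[1], gls.bse[1])
```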
Abstract:
Outgoing radiation is introduced in the framework of classical predictive electrodynamics, using the Lorentz-Dirac equation as a subsidiary condition. In a perturbative scheme in the charges, the first radiative self-terms of the accelerations, momentum, and angular momentum of a two-charge system without external field are calculated.
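For reference, a commonly used form of that subsidiary condition is the Lorentz-Dirac equation, shown here in Gaussian units with metric signature (+,-,-,-); conventions vary across the literature, so this rendering is indicative rather than the paper's exact notation:

\[
m\,\dot{v}^{\mu} \;=\; F^{\mu}_{\mathrm{ext}} \;+\; \frac{2e^{2}}{3c^{3}}\!\left(\ddot{v}^{\mu} + \frac{\dot{v}^{\nu}\dot{v}_{\nu}}{c^{2}}\,v^{\mu}\right),
\]

where \(v^{\mu}\) is the four-velocity and dots denote derivatives with respect to proper time; the bracketed term comprises the Schott term and the radiated momentum loss.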
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold-standard TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this assignment. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. 10 computer programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top 2 software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas 2 integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be regarded with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
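To make the "a posteriori adjustment" concrete, here is a minimal sketch of Bayesian (MAP) individualization for a one-compartment IV bolus model; all parameter values, priors, and the measured level are invented for illustration and do not come from any of the benchmarked programs:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative population priors: clearance CL (L/h) and volume V (L),
# with log-normal between-subject variability and proportional error
pop_cl, pop_v = 5.0, 50.0
omega_cl, omega_v = 0.3, 0.2
sigma = 0.15

dose, t_obs, c_obs = 500.0, 8.0, 4.2   # mg, h, mg/L: one measured level

def neg_log_post(theta):
    cl, v = np.exp(theta)
    c_pred = dose / v * np.exp(-cl / v * t_obs)
    # log-likelihood of the observation (proportional error model)
    ll = -0.5 * ((np.log(c_obs) - np.log(c_pred)) / sigma) ** 2
    # log-normal priors from the population model
    lp = (-0.5 * ((theta[0] - np.log(pop_cl)) / omega_cl) ** 2
          - 0.5 * ((theta[1] - np.log(pop_v)) / omega_v) ** 2)
    return -(ll + lp)

res = minimize(neg_log_post, x0=np.log([pop_cl, pop_v]))
cl_map, v_map = np.exp(res.x)
print(f"MAP estimates: CL = {cl_map:.2f} L/h, V = {v_map:.1f} L")

# A posteriori dose suggestion to hit a target level at t_obs hours
c_target = 6.0
c_per_dose = v_map**-1 * np.exp(-cl_map / v_map * t_obs)
print(f"suggested dose: {c_target / c_per_dose:.0f} mg")
```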
Abstract:
We develop a real option model of the irreversible native grassland conversion decision. Upon plowing, native grassland can be followed by either a permanent cropping system or a system in which land is put under cropping (respectively, grazing) whenever crop prices are high (respectively, low). Switching costs are incurred upon alternating between cropping and grazing. The effects of risk intervention in the form of crop insurance subsidies are studied, as are the effects of cropping innovations that reduce switching costs. We calibrate the model using cropping return data for South Central North Dakota from 1989 to 2012. Simulations show that a risk intervention offsetting 20% of a cropping return shortfall increases the sod-busting cost threshold, below which native sod will be busted, by 41% (or $43.70/acre). Omitting cropping return risk across time underestimates this sod-busting cost threshold by 23% (or $24.35/acre), and hence underestimates the native sod conversion caused by crop production.
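A minimal sketch of the underlying intuition (why return risk raises the conversion threshold): under a flexible cropping/grazing system, the landowner holds an option on the yearly maximum of the two returns, whose value grows with volatility. All numbers below are invented, and the sketch ignores switching costs and the option to delay conversion, both of which the full model prices:

```python
import numpy as np

rng = np.random.default_rng(0)

def conversion_threshold(sigma, mu=0.0, r=0.05, R0=40.0, G=20.0,
                         T=50, n_paths=20000):
    """Sod-busting cost below which converting beats keeping grassland.

    Flexible system: each year take max(cropping return, grazing return).
    A toy Monte Carlo, not the paper's calibrated model.
    """
    z = rng.standard_normal((n_paths, T))
    # annual cropping returns as geometric Brownian motion, $/acre
    R = R0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) + sigma * z, axis=1))
    disc = np.exp(-r * np.arange(1, T + 1))
    v_flex = (np.maximum(R, G) * disc).sum(axis=1).mean()
    v_graze = (G * disc).sum()        # grassland kept in grazing forever
    return v_flex - v_graze

# Higher return volatility raises the threshold (convexity of the max)
for s in (0.0, 0.15, 0.30):
    print(f"sigma = {s:.2f}: threshold = {conversion_threshold(s):7.1f} $/acre")
```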
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold-standard TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested, and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available for some programs. Moreover, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict dosage regimens (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top 2 programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas 2 software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capability of data storage, and automated report generation.
Abstract:
The temporal dynamics of species diversity are shaped by variations in the rates of speciation and extinction, and there is a long history of inferring these rates using first and last appearances of taxa in the fossil record. Understanding diversity dynamics critically depends on unbiased estimates of the unobserved times of speciation and extinction for all lineages, but the inference of these parameters is challenging due to the complex nature of the available data. Here, we present a new probabilistic framework to jointly estimate species-specific times of speciation and extinction and the rates of the underlying birth-death process based on the fossil record. The rates are allowed to vary through time independently of each other, and the probability of preservation and sampling is explicitly incorporated in the model to estimate the true lifespan of each lineage. We implement a Bayesian algorithm to assess the presence of rate shifts by exploring alternative diversification models. Tests on a range of simulated data sets reveal the accuracy and robustness of our approach against violations of the underlying assumptions and various degrees of data incompleteness. Finally, we demonstrate the application of our method with the diversification of the mammal family Rhinocerotidae and reveal a complex history of repeated and independent temporal shifts of both speciation and extinction rates, leading to the expansion and subsequent decline of the group. The estimated parameters of the birth-death process implemented here are directly comparable with those obtained from dated molecular phylogenies. Thus, our model represents a step towards integrating phylogenetic and fossil information to infer macroevolutionary processes.
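As a toy illustration of the kind of inference involved, the sketch below jointly estimates a constant extinction rate and a Poisson preservation rate from simulated lineage lifespans and fossil counts; the actual framework treats the unobserved speciation and extinction times as latent variables and samples time-varying rates by MCMC, which this sketch does not attempt:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulated "truth": lifespans exponential under extinction rate mu,
# fossil counts Poisson given a per-lineage preservation rate q
true_mu, true_q = 0.1, 0.8
lifespans = rng.exponential(1 / true_mu, 50)        # Myr
counts = rng.poisson(true_q * lifespans)            # fossils per lineage

def neg_log_lik(params):
    mu, q = np.exp(params)                          # log scale keeps rates > 0
    # exponential lifespans under a constant extinction rate...
    ll = np.sum(np.log(mu) - mu * lifespans)
    # ...and Poisson fossil counts given the preservation rate
    ll += np.sum(counts * np.log(q * lifespans) - q * lifespans)
    return -ll

res = minimize(neg_log_lik, x0=np.log([0.5, 0.5]))
print("estimated mu, q:", np.exp(res.x))
```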
Abstract:
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
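A stripped-down sketch of the core model (a mixture of multivariate normals over co-registered channels) can be written with scikit-learn; the volumes below are synthetic stand-ins, and MBIS's B-spline bias field correction and graph-cuts MRF regularization are deliberately omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for two co-registered channels (e.g. T1 and T2):
# three "tissues" with different multivariate means, plus noise
shape = (32, 32, 32)
labels_true = rng.integers(0, 3, size=shape)
means = np.array([[1.0, 3.0], [2.5, 1.5], [4.0, 4.0]])
t1 = means[labels_true][..., 0] + rng.normal(0, 0.3, shape)
t2 = means[labels_true][..., 1] + rng.normal(0, 0.3, shape)
mask = np.ones(shape, dtype=bool)

# Voxels x channels feature matrix; one multivariate normal per class
X = np.stack([t1[mask], t2[mask]], axis=1)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)

segmentation = np.zeros(shape, dtype=np.int8)
segmentation[mask] = labels + 1          # 0 is kept for background
print(np.bincount(segmentation.ravel()))
```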
Abstract:
Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to one single process which eventually aims at bringing individuals to court while minimising the risk of miscarriage of justice. In order to go beyond this paradigm, we propose to refocus attention on traces themselves, as remnants of a criminal activity, and on their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform actors in new policing strategies that place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security-oriented models is still not well identified and captured. In order to create the best conditions for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure the knowledge used by crime scene examiners in their effective practice (part II).
Abstract:
A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner. The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationship, and network position all impact a firm's performance. Although the previous literature indicates that there are disagreements about the impact of dense or sparse structure, as well as strong or weak ties, case evidence from Chinese software companies reveals that dense and strong connections with partners are positively associated with firms' performance. The second essay is a theoretical essay that illustrates the limitations of social network theory for explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationship, and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability, and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and determining the distribution of network rents between partners. This essay also highlights the contingent effects of network capability on a firm's innovation output, and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model has been pre-tested with a case study of the Chinese software industry, which enhances the internal validity of the theory. The third essay addresses the questions of what impact network capability has on firm innovation performance and what the antecedent factors of network capability are. This essay employs a structural equation modelling methodology on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms deal with cooperation between, and coordination with, partners on different levels according to their levels of network capability. The empirical results also suggest that IT maturity, openness of culture, management system involvement, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partner(s) shows that when there is a culture and norm gap with foreign partners, a firm must mobilize more resources and effort to improve its performance with respect to its innovation network. The fourth essay addresses the way in which network capabilities influence firm innovation performance.
Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that there is a significant partial mediating effect of knowledge transfer on the relationship between network capabilities and innovation performance. The findings also reveal that the impacts of network capabilities vary with the environment and with the strategic decision the firm has made: exploration or exploitation. Network constructing capability has a greater positive impact on, and yields more contributions to, innovation performance than network operating capability in an exploration network. Network operating capability is more important than network constructing capability for innovative firms in an exploitation network. These findings therefore highlight that a firm can shape its innovation network proactively for better benefits, but when it does so, it should adjust its focus and change its efforts in accordance with its innovation purposes or strategic orientation.
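For readers unfamiliar with the partial-mediation test mentioned above, a minimal regression-based sketch in the Baron-Kenny style is shown below on synthetic data; the variable names (nc, kt, perf) and coefficients are invented, and the dissertation itself used hierarchical multiple regression on survey measures:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated firms: network capability (nc) -> knowledge transfer (kt)
# -> innovation performance (perf), plus a direct nc -> perf path
n = 211
nc = rng.normal(size=n)
kt = 0.5 * nc + rng.normal(scale=0.8, size=n)
perf = 0.3 * nc + 0.4 * kt + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"nc": nc, "kt": kt, "perf": perf})

total = smf.ols("perf ~ nc", df).fit()           # total effect
med = smf.ols("kt ~ nc", df).fit()               # capability -> transfer
direct = smf.ols("perf ~ nc + kt", df).fit()     # direct effect with mediator

print(f"total effect:  {total.params['nc']:.3f}")
print(f"direct effect: {direct.params['nc']:.3f}  (drop suggests partial mediation)")
print(f"indirect path: {med.params['nc'] * direct.params['kt']:.3f}")
```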
Abstract:
As a rigorous combination of probability and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction on how interested readers may use these analytical approaches - with the help of Bayesian networks - for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
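As a flavour of such a context-independent fragment, here is a minimal two-node "hypothesis to evidence" network sketched with the pgmpy library; the probabilities are invented, and the fragment is generic rather than the toner network developed in Part II:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# H: hypothesis (state 0 = true, 1 = false); E: evidence (0 = observed)
model = BayesianNetwork([("H", "E")])

prior_h = TabularCPD("H", 2, [[0.01], [0.99]])       # invented prior
# P(E | H): rows are states of E, columns are states of H
likelihood = TabularCPD("E", 2, [[0.95, 0.10],
                                 [0.05, 0.90]],
                        evidence=["H"], evidence_card=[2])
model.add_cpds(prior_h, likelihood)

# Posterior on H after instantiating the evidence node
posterior = VariableElimination(model).query(["H"], evidence={"E": 0})
print(posterior)
```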
Abstract:
BACKGROUND: Although methicillin-susceptible Staphylococcus aureus (MSSA) native bone and joint infection (BJI) constitutes the most frequent clinical entity of BJI, prognostic studies have mostly focused on methicillin-resistant S. aureus prosthetic joint infection. We aimed to assess the determinants of native MSSA BJI outcomes. METHODS: Retrospective cohort study (2001-2011) of patients admitted to a reference hospital centre for native MSSA BJI. Treatment failure determinants were assessed using Kaplan-Meier curves and binary logistic regression. RESULTS: Sixty-six patients (42 males [63.6%]; median age 61.2 years; interquartile range [IQR] 45.9-71.9) presented with an acute (n = 38; 57.6%) or chronic (n = 28; 42.4%) native MSSA arthritis (n = 15; 22.7%), osteomyelitis (n = 19; 28.8%) or spondylodiscitis (n = 32; 48.5%), considered "difficult-to-treat" in 61 cases (92.4%). All received prolonged (27.1 weeks; IQR, 16.9-36.1) combined antimicrobial therapy, after surgical management in 37 cases (56.1%). Sixteen treatment failures (24.2%) were observed during a median follow-up period of 63.3 weeks (IQR, 44.7-103.1), including 13 persisting infections, 1 relapse after treatment disruption, and 2 super-infections. Independent determinants of treatment failure were the existence of a sinus tract (odds ratio [OR], 5.300; 95% confidence interval [CI], 1.166-24.103) and a prolonged delay to infectious disease specialist referral (OR, 1.134; 95% CI, 1.013-1.271). CONCLUSIONS: The high treatment failure rate pinpoints the difficulty of achieving cure in complicated native MSSA BJI. Early infectious disease specialist referral is essential, especially in debilitated patients or in the presence of a sinus tract.
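A minimal sketch of the two analysis steps named in the methods (Kaplan-Meier estimation and binary logistic regression) is shown below on a simulated stand-in cohort; all values and variable names are invented and bear no relation to the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)

# Simulated cohort of n = 66 with a sinus tract indicator and a
# referral delay (days); failure risk increases with both
n = 66
sinus_tract = rng.binomial(1, 0.3, n)
referral_delay = rng.exponential(7.0, n)
p_fail = 1 / (1 + np.exp(-(-2.5 + 1.6 * sinus_tract + 0.08 * referral_delay)))
failure = rng.binomial(1, p_fail)
weeks = rng.exponential(60.0, n)                    # follow-up, weeks
df = pd.DataFrame({"failure": failure, "sinus_tract": sinus_tract,
                   "referral_delay": referral_delay, "weeks": weeks})

# Kaplan-Meier estimate of failure-free survival over follow-up
kmf = KaplanMeierFitter().fit(df["weeks"], event_observed=df["failure"])
print(kmf.survival_function_.tail())

# Binary logistic regression; exponentiated coefficients are odds ratios
fit = smf.logit("failure ~ sinus_tract + referral_delay", df).fit(disp=0)
print(np.exp(fit.params))
```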
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences when used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated, and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, can be achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions of decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
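For concreteness, the standard (treated-side) net benefit underlying decision curves can be computed as in this minimal sketch; the overall net benefit and the case-control correction proposed in the paper are extensions beyond what is shown here, and the data are simulated:

```python
import numpy as np

def net_benefit(y_true, p_hat, thresholds):
    """Net benefit of treating subjects with predicted risk >= t.

    Standard decision-curve formula: NB(t) = TP/n - (FP/n) * t / (1 - t),
    where n is the total sample size.
    """
    y_true, p_hat = np.asarray(y_true), np.asarray(p_hat)
    n = len(y_true)
    out = []
    for t in thresholds:
        treat = p_hat >= t
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        out.append(tp / n - (fp / n) * t / (1 - t))
    return np.array(out)

# Compare the model against the "treat all" and "treat none" strategies
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.2, 1000)
p = np.clip(0.2 + 0.3 * (y - 0.2) + rng.normal(0, 0.15, 1000), 0.01, 0.99)
ts = np.linspace(0.05, 0.5, 10)
prev = y.mean()
nb_model = net_benefit(y, p, ts)
nb_all = prev - (1 - prev) * ts / (1 - ts)        # treat everyone
print(np.round(np.c_[ts, nb_model, nb_all], 3))   # treat none: NB = 0
```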