94 results for Single-process Models


Relevance: 30.00%

Abstract:

1. Identifying areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSMs) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated.

2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river.

3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data from before the recolonization disagreed with these predictions, missing the recolonized river's suitability and badly describing the otter's niche. Our results highlight three points of relevance to modelling practice: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may introduce randomness into the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the Boyce Index (CBI), based on presence data only, better highlighted the models' fit to the recolonization observations.

4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help to progressively reduce errors.

5. Synthesis and applications. Conservation plans depend on reliable models of species' suitable habitats. In non-equilibrium situations, as is the case for threatened or invasive species, models may be affected negatively by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
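The presence-only Boyce index (CBI) favoured above can be sketched briefly: suitability scores are binned, a predicted-to-expected (P/E) ratio is computed per bin, and the index is the correlation between P/E and suitability, so a well-calibrated model scores near +1. This is a simplified illustration (Pearson instead of the usual Spearman correlation), and all data and function names are invented for the example.

```python
def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def boyce_index(presence_scores, background_scores, n_bins=5):
    # Bin the suitability range observed in the background (available habitat).
    lo, hi = min(background_scores), max(background_scores)
    width = (hi - lo) / n_bins
    mids, pe = [], []
    for b in range(n_bins):
        a, z = lo + b * width, lo + (b + 1) * width
        last = b == n_bins - 1  # include the upper edge in the final bin
        count = lambda xs: sum(a <= x < z or (last and x == z) for x in xs)
        p = count(presence_scores) / len(presence_scores)      # observed presences
        e = count(background_scores) / len(background_scores)  # expected (available)
        if e > 0:
            mids.append((a + z) / 2)
            pe.append(p / e)
    return pearson(mids, pe)

# Presences concentrated in highly suitable cells -> index close to +1.
presences = [0.7, 0.8, 0.85, 0.9, 0.95]
background = [i / 100 for i in range(1, 100)]
cbi = boyce_index(presences, background)
```

Because the index only needs presences plus a background sample, it stays usable exactly where the abstract says AUC fails: when reliable absences do not exist.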

Relevance: 30.00%

Abstract:

Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint-enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework.
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
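The abstract relies on the Petri-net notions of places, tokens, and transitions; a minimal plain place/transition sketch (not the CO-OPN object-oriented formalism used by MORM) illustrates the firing rule on which such machine subnets are built. All place and transition names here are invented.

```python
# A transition is (inputs, outputs): token demands on input places and
# token deposits on output places. A marking maps place -> token count.

def enabled(marking, transition):
    # A transition is enabled when every input place holds enough tokens.
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    # Firing consumes tokens from inputs and produces tokens on outputs.
    inputs, outputs = transition
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# 'load' moves a part from the buffer into machine M1 if M1 is free;
# interconnecting machines amounts to sharing places between such subnets.
load = ({"buffer": 1, "M1_free": 1}, {"M1_busy": 1})
m0 = {"buffer": 2, "M1_free": 1}
m1 = fire(m0, load)
```

After firing, `M1_free` is empty, so `load` is no longer enabled — the marking itself encodes the machine's state space.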

Relevance: 30.00%

Abstract:

Abstract: Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claim severity between accident years. Large changes in these processes will distort the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that identifies and quantifies these two influences, and to determine which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model expresses the speed variability in the reporting process and the development of claim severity as a function of two of these distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims-reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on chosen simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance.
These data include different development patterns and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for claims-reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and hence high reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
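The Chain-Ladder method against which the thesis benchmarks its models can be sketched briefly: age-to-age development factors are estimated from the cumulative run-off triangle and used to project each accident year to its ultimate cost. The triangle below is invented toy data, and the volume-weighted factor estimator is the textbook variant, not necessarily the exact formulation used in the thesis.

```python
def chain_ladder(triangle):
    """triangle[i] = cumulative payments of accident year i (rows shorten by one)."""
    n = len(triangle)
    # Volume-weighted development factors: f_j = sum(C[i, j+1]) / sum(C[i, j])
    # over the accident years that have reached development period j+1.
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    # Project each accident year's latest diagonal value to ultimate.
    ultimates = []
    for row in triangle:
        c = row[-1]
        for j in range(len(row) - 1, n - 1):
            c *= factors[j]
        ultimates.append(c)
    reserves = [u - row[-1] for u, row in zip(ultimates, triangle)]
    return factors, ultimates, reserves

triangle = [
    [100, 150, 165],  # oldest accident year, fully developed
    [110, 170],
    [120],
]
factors, ultimates, reserves = chain_ladder(triangle)
```

The projection multiplies each diagonal value by all remaining factors, which is exactly where the positive-correlation assumption bites: high observed cumulated payments mechanically produce high reserves.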

Relevance: 30.00%

Abstract:

Evidence-based practice (EBP) aims for a new distribution of power centred on scientific evidence rather than clinical expertise. The present article describes the operational process of EBP by describing the stages of implementing this type of practice. This presentation of the stages is essential given that there are many conceptions and models of EBP and that some nurses have limited knowledge of its rules and implications. Given that the number and formulation of the stages vary by author, the process presented here attempts to integrate the different stages reviewed.

Relevance: 30.00%

Abstract:

The epithelial Na+ channel (ENaC) belongs to a new class of channel proteins called the ENaC/DEG superfamily involved in epithelial Na+ transport, mechanotransduction, and neurotransmission. The role of ENaC in Na+ homeostasis and in the control of blood pressure has been demonstrated recently by the identification of mutations in ENaC beta and gamma subunits causing hypertension. The function of ENaC in Na+ reabsorption depends critically on its ability to discriminate between Na+ and other ions like K+ or Ca2+. ENaC is virtually impermeant to K+ ions, and the molecular basis for its high ionic selectivity is largely unknown. We have identified a conserved Ser residue in the second transmembrane domain of the ENaC alpha subunit (alphaS589), which when mutated allows larger ions such as K+, Rb+, Cs+, and divalent cations to pass through the channel. The relative ion permeability of each of the alphaS589 mutants is related inversely to the ionic radius of the permeant ion, indicating that alphaS589 mutations increase the molecular cutoff of the channel by modifying the pore geometry at the selectivity filter. Proper geometry of the pore is required to tightly accommodate Na+ and Li+ ions and to exclude larger cations. We provide evidence that ENaC discriminates between cations mainly on the basis of their size and the energy of dehydration.

Relevance: 30.00%

Abstract:

Aim: Conservation strategies need predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach to community models that accounts for this variation and determine how well it reproduces observed properties of alpine butterfly communities. Location: The western Swiss Alps. Methods: We propose a new approach to processing probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict, and assess the uncertainty in, predictions of community properties. We test the utility of our novel approach against a traditional threshold-based approach. We used mountain butterfly communities spanning a large elevation gradient as a case study and evaluated the ability of our approach to model the species richness and phylogenetic diversity of communities. Results: S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, a syndrome of environmental filtering. The prediction accuracy of community properties varies along the environmental gradient: variability in predictions of species richness was higher at low elevation, while for phylogenetic diversity it was lower. Our approach allowed us to map the variability in species richness and phylogenetic diversity projections. Main conclusion: Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers of the uncertainty in the modelling results.
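The contrast between the probabilistic and the threshold-based treatment of S-SDM outputs can be sketched as follows: summing per-species occurrence probabilities at a site yields the expected species richness, and an uncertainty measure comes with it, whereas a 0.5 threshold collapses each species to presence/absence first and discards that information. The independence assumption and all probabilities below are illustrative, not claims of the study.

```python
def richness_probabilistic(probs):
    # Expected richness is the sum of per-species occurrence probabilities;
    # under independent Bernoulli occurrences (an assumption of this sketch)
    # the variance is the Poisson-binomial variance.
    mean = sum(probs)
    var = sum(p * (1 - p) for p in probs)
    return mean, var

def richness_threshold(probs, cutoff=0.5):
    # Traditional approach: binarize each species, then count.
    return sum(p >= cutoff for p in probs)

# Hypothetical per-species occurrence probabilities at one site.
site_probs = [0.9, 0.6, 0.4, 0.1]
mean, var = richness_probabilistic(site_probs)
hard = richness_threshold(site_probs)
```

Here both approaches give a richness of 2, but only the probabilistic one reports how variable that count is across possible assemblage realisations, which is what the mapped uncertainty in the study builds on.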

Relevance: 30.00%

Abstract:

The pace of on-going climate change calls for reliable plant biodiversity scenarios. Traditional dynamic vegetation models use plant functional types that are summarized to such an extent that they become meaningless for biodiversity scenarios. Hybrid dynamic vegetation models of intermediate complexity (hybrid-DVMs) have recently been developed to address this issue. These models, at the crossroads between phenomenological and process-based models, are able to involve an intermediate number of well-chosen plant functional groups (PFGs). The challenge is to build meaningful PFGs that are representative of plant biodiversity, and consistent with the parameters and processes of hybrid-DVMs. Here, we propose and test a framework based on few selected traits to define a limited number of PFGs, which are both representative of the diversity (functional and taxonomic) of the flora in the Ecrins National Park, and adapted to hybrid-DVMs. This new classification scheme, together with recent advances in vegetation modeling, constitutes a step forward for mechanistic biodiversity modeling.

Relevance: 30.00%

Abstract:

This study details a method to statistically determine, on a millisecond scale and for individual subjects, those brain areas whose activity differs between experimental conditions, using single-trial scalp-recorded EEG data. To do this, we non-invasively estimated local field potentials (LFPs) using the ELECTRA distributed inverse solution and applied non-parametric statistical tests at each brain voxel and for each time point. This yields a spatio-temporal activation pattern of differential brain responses. The method is illustrated here in the analysis of auditory-somatosensory (AS) multisensory interactions in four subjects. Differential multisensory responses were temporally and spatially consistent across individuals, with onset at approximately 50 ms and superposition within areas of the posterior superior temporal cortex that have traditionally been considered auditory in their function. The close agreement of these results with previous investigations of AS multisensory interactions suggests that the present approach constitutes a reliable method for studying multisensory processing with the temporal and spatial resolution required to elucidate several existing questions in this field. In particular, the present analyses permit a more direct comparison between human and animal studies of multisensory interactions and can be extended to examine correlation between electrophysiological phenomena and behavior.

Relevance: 30.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.

Relevance: 30.00%

Abstract:

In a recent paper, Traulsen and Nowak use a multilevel selection model to show that cooperation can be favored by group selection in finite populations [Traulsen A, Nowak M (2006) Proc Natl Acad Sci USA 103:10952-10955]. The authors challenge the view that kin selection may be an appropriate interpretation of their results and state that group selection is a distinctive process "that permeates evolutionary processes from the emergence of the first cells to eusociality and the economics of nations." In this paper, we start by addressing Traulsen and Nowak's challenge and demonstrate that all their results can be obtained by an application of kin selection theory. We then extend Traulsen and Nowak's model to life history conditions that have been previously studied. This allows us to highlight the differences and similarities between Traulsen and Nowak's model and typical kin selection models and also to broaden the scope of their results. Our retrospective analyses of Traulsen and Nowak's model illustrate that it is possible to convert group selection models to kin selection models without disturbing the mathematics describing the net effect of selection on cooperation.

Relevance: 30.00%

Abstract:

Chemokines are small chemotactic molecules widely expressed throughout the central nervous system. A number of papers during the past few years have suggested that they have physiological functions in addition to their roles in neuroinflammatory diseases. In this context, the best evidence concerns the CXC-chemokine stromal cell-derived factor (SDF-1alpha or CXCL12) and its receptor CXCR4, whose signalling cascade is also implicated in the glutamate release process from astrocytes. Recently, astrocytic synaptic-like microvesicles (SLMVs) that express vesicular glutamate transporters (VGLUTs) and are able to release glutamate by Ca(2+)-dependent regulated exocytosis have been described both in tissue and in cultured astrocytes. Here, in order to elucidate whether the SDF-1alpha/CXCR4 system can participate in the brain's fast communication systems, we investigated whether the activation of the CXCR4 receptor triggers glutamate exocytosis in astrocytes. Using total internal reflection fluorescence (TIRF) microscopy and the membrane-fluorescent styryl dye FM4-64, we adapted an imaging methodology recently developed to measure exocytosis and recycling in synaptic terminals, and monitored the CXCR4-mediated exocytosis of SLMVs in astrocytes. We analyzed the co-localization of VGLUT with the FM dye at the single-vesicle level, and observed the kinetics of FM dye release during single fusion events. We found that the activation of CXCR4 receptors triggered a burst of exocytosis on a millisecond time scale that involved the release of Ca(2+) from internal stores. These results support the idea that astrocytes can respond to external stimuli and communicate with neighboring cells via fast release of glutamate.

Relevance: 30.00%

Abstract:

Single amino acid substitution is the type of protein alteration most related to human diseases. Current studies seek primarily to distinguish neutral mutations from harmful ones. Very few methods offer an explanation of the final prediction result in terms of the probable structural or functional effect on the protein. In this study, we describe the use of three novel parameters to identify experimentally-verified critical residues of the TP53 protein (p53). The first two parameters make use of a surface clustering method to calculate the protein surface area of highly conserved regions or regions with high nonlocal atomic interaction energy (ANOLEA) score. These parameters help identify important functional regions on the surface of a protein. The last parameter involves the use of a new method for pseudobinding free-energy estimation to specifically probe the importance of residue side-chains to the stability of protein fold. A decision tree was designed to optimally combine these three parameters. The result was compared to the functional data stored in the International Agency for Research on Cancer (IARC) TP53 mutation database. The final prediction achieved a prediction accuracy of 70% and a Matthews correlation coefficient of 0.45. It also showed a high specificity of 91.8%. Mutations in the 85 correctly identified important residues represented 81.7% of the total mutations recorded in the database. In addition, the method was able to correctly assign a probable functional or structural role to the residues. Such information could be critical for the interpretation and prediction of the effect of missense mutations, as it not only provided the fundamental explanation of the observed effect, but also helped design the most appropriate laboratory experiment to verify the prediction results.

Relevance: 30.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

BACKGROUND/AIMS: The present report examines a new pig model for progressive induction of high-grade stenosis, for the study of chronic myocardial ischemia and the dynamics of collateral vessel growth. METHODS: Thirty-nine Landrace pigs were instrumented with a novel experimental stent (GVD stent) in the left anterior descending coronary artery. Eight animals underwent transthoracic echocardiography at rest and under low-dose dobutamine. Seven animals were examined by nuclear PET and SPECT analysis. Epi-, mid- and endocardial fibrosis and the numbers of arterial vessels were examined by histology. RESULTS: Functional analysis showed a significant decrease in global left ventricular ejection fraction (24.5 +/- 1.6%) 3 weeks after implantation. There was a trend to increased left ventricular ejection fraction after low-dose dobutamine stress (36.0 +/- 6.6%) and a significant improvement of the impaired regional anterior wall motion. PET and SPECT imaging documented chronic hibernation. Myocardial fibrosis increased significantly in the ischemic area with a gradient from epi- to endocardial. The number of arterial vessels in the ischemic area increased and coronary angiography showed abundant collateral vessels of Rentrop class 1. CONCLUSION: The presented experimental model mimics the clinical situation of chronic myocardial ischemia secondary to 1-vessel coronary disease.

Relevance: 30.00%

Abstract:

The success of combination antiretroviral therapy is limited by the evolutionary escape dynamics of HIV-1. We used Isotonic Conjunctive Bayesian Networks (I-CBNs), a class of probabilistic graphical models, to describe this process. We employed partial order constraints among viral resistance mutations, which give rise to a limited set of mutational pathways, and we modeled phenotypic drug resistance as monotonically increasing along any escape pathway. Using this model, the individualized genetic barrier (IGB) to each drug is derived as the probability of the virus not acquiring additional mutations that confer resistance. Drug-specific IGBs were combined to obtain the IGB to an entire regimen, which quantifies the virus' genetic potential for developing drug resistance under combination therapy. The IGB was tested as a predictor of therapeutic outcome using between 2,185 and 2,631 treatment change episodes of subtype B infected patients from the Swiss HIV Cohort Study Database, a large observational cohort. Using logistic regression, significant univariate predictors included most of the 18 drugs and single-drug IGBs, the IGB to the entire regimen, the expert rules-based genotypic susceptibility score (GSS), several individual mutations, and the peak viral load before treatment change. In the multivariate analysis, the only genotype-derived variables that remained significantly associated with virological success were GSS and, with 10-fold stronger association, IGB to regimen. When predicting suppression of viral load below 400 cps/ml, IGB outperformed GSS and also improved GSS-containing predictors significantly, but the difference was not significant for suppression below 50 cps/ml. Thus, the IGB to regimen is a novel data-derived predictor of treatment outcome that has potential to improve the interpretation of genotypic drug resistance tests.
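The individualized genetic barrier described above can be illustrated with a toy calculation. Assuming, for this sketch only, that the relevant mutation-acquisition events are independent, the IGB to a drug is the probability of acquiring none of the mutations that confer resistance to it, and a regimen-level barrier can then be formed as a product of single-drug barriers. The probabilities are invented, and this combination rule is a simplification of the paper's I-CBN model, which conditions on the mutational pathways already traversed.

```python
def igb_drug(p_acquire_each_mutation):
    # Probability that the virus acquires NONE of the resistance mutations
    # for this drug (independence is an assumption of this sketch).
    prob_no_escape = 1.0
    for p in p_acquire_each_mutation:
        prob_no_escape *= (1.0 - p)
    return prob_no_escape

def igb_regimen(per_drug_acquisition_probs):
    # Simplified regimen-level barrier: product of single-drug barriers.
    barrier = 1.0
    for probs in per_drug_acquisition_probs:
        barrier *= igb_drug(probs)
    return barrier

# Hypothetical per-mutation acquisition probabilities for two drugs.
barrier = igb_regimen([[0.1], [0.2]])
```

The product structure makes the qualitative behaviour clear: each additional mutation the virus still needs lowers the escape probability multiplicatively, which is why a regimen whose resistance pathways are long has a high genetic barrier.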