833 results for Probabilistic methodology
Abstract:
Patterns of species interactions affect the dynamics of food webs. An important component of species interactions that is rarely considered with respect to food webs is the strengths of interactions, which may affect both structure and dynamics. In natural systems, these strengths are variable, and can be quantified as probability distributions. We examined how variation in strengths of interactions can be described hierarchically, and how this variation impacts the structure of species interactions in predator-prey networks, both of which are important components of ecological food webs. The stable isotope ratios of predator and prey species may be particularly useful for quantifying this variability, and we show how these data can be used to build probabilistic predator-prey networks. Moreover, the distribution of variation in strengths among interactions can be estimated from a limited number of observations. This distribution informs network structure, especially the key role of dietary specialization, which may be useful for predicting structural properties in systems that are difficult to observe. Finally, using three mammalian predator-prey networks (two African and one Canadian) quantified from stable isotope data, we show that exclusion of link-strength variability results in biased estimates of nestedness and modularity within food webs, whereas the inclusion of body size constraints only marginally increases the predictive accuracy of the isotope-based network. We find that modularity is the consequence of strong link-strengths in both African systems, while nestedness is not significantly present in any of the three predator-prey networks.
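The propagation idea above can be sketched numerically: draw link strengths from their probability distributions and push each draw through a binary network metric. Everything below (the 3x4 diet-proportion matrix, the common 0.08 standard deviation, and the 0.05 link cutoff) is an illustrative assumption, not the authors' data; connectance stands in for richer metrics such as nestedness or modularity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior means of diet proportions (3 predators x 4 prey),
# e.g. as output by a stable-isotope mixing model; values are made up.
mean = np.array([[0.60, 0.30, 0.10, 0.00],
                 [0.10, 0.10, 0.40, 0.40],
                 [0.30, 0.30, 0.20, 0.20]])
sd = 0.08  # assumed common uncertainty on every link strength

def connectance(binary):
    # Fraction of realized links in the predator-prey matrix
    return binary.sum() / binary.size

draws = []
for _ in range(2000):
    strengths = np.clip(mean + sd * rng.standard_normal(mean.shape), 0, 1)
    draws.append(connectance(strengths > 0.05))  # links above a cutoff

point = connectance(mean > 0.05)  # point estimate ignoring variability
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"point: {point:.2f}, 95% interval: [{lo:.2f}, {hi:.2f}]")
```

The spread between `lo` and `hi` is what a point-estimate network discards.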
Abstract:
Extracorporeal treatments (ECTRs), such as hemodialysis and hemoperfusion, are used in poisoning despite a lack of controlled human trials demonstrating efficacy. To provide uniform recommendations, the EXTRIP group was formed as an international collaboration among recognized experts from nephrology, clinical toxicology, critical care, and pharmacology, supported by over 30 professional societies. For every poison, the clinical benefit of ECTR is weighed against associated complications, alternative therapies, and costs. Rigorous methodology, using the AGREE instrument, was developed and ratified. Methods rely on evidence appraisal and, in the absence of robust studies, on a thorough and transparent process of consensus statements. Twenty-four poisons were chosen according to their frequency, available evidence, and relevance. A systematic literature search was performed in order to retrieve all original publications regardless of language. Data were extracted on a standardized instrument. Quality of the evidence was assessed by GRADE as: High = A, Moderate = B, Low = C, Very Low = D. For every poison, dialyzability was assessed and the clinical effect of ECTR summarized. All pertinent documents were submitted to the workgroup with a list of statements for vote (general statement, indications, timing, ECTR choice). A modified Delphi method with two voting rounds was used, between which deliberation was required. Each statement was voted on a Likert scale (1-9) to establish the strength of recommendation. This approach will permit the production of the first important practice guidelines on this topic.
Abstract:
PURPOSE: To report the results of a Latin American consensus panel regarding the diagnosis and management of primary open-angle glaucoma and to compare these results with those from a similar panel in the United States. DESIGN: A RAND-like (Research and Development) appropriateness methodology was used to assess glaucoma practice in Latin America. METHODS: The 148 polling statements created for the RAND-like analysis in the United States and 10 additional statements specific to glaucoma care in Latin America were presented to a panel of Latin American glaucoma experts. Panelists were polled in private using the RAND-like methodology before and after the panel meeting. RESULTS: Consensus agreement or disagreement among Latin American experts was reached for 51.3% of statements before the meeting and increased to 66.5% with the private, anonymous polling after the meeting (79.0% agreement, 21.0% disagreement). Although there was a high degree of concordance (111 of 148 statements; 75%) between the results of this Latin American panel and the United States panel, there were some notable exceptions relating to diagnostic and therapeutic decision making. CONCLUSIONS: This RAND-like consensus methodology provides a perspective on how Latin American glaucoma practitioners view many aspects of glaucoma and compares these results with those obtained using a similar methodology from practitioners in the United States. These findings may be helpful to ophthalmologists providing glaucoma care in Latin America and in other regions of the world. (Am J Ophthalmol 2012;154:460-465. (C) 2012 by Elsevier Inc. All rights reserved.)
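For readers unfamiliar with RAND-style appropriateness polling, here is a minimal sketch of how one statement's 1-9 ratings might be tallied. The tertile cutoffs and the disagreement rule follow the commonly cited RAND/UCLA conventions and may differ from the exact rules this panel used; the sample ratings are invented.

```python
import statistics

def classify(ratings):
    # Disagreement: at least a third of panelists in each extreme tertile
    low = sum(r <= 3 for r in ratings)
    high = sum(r >= 7 for r in ratings)
    if low >= len(ratings) / 3 and high >= len(ratings) / 3:
        return "disagreement"
    # Otherwise the panel median decides the category
    med = statistics.median(ratings)
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

print(classify([7, 8, 8, 9, 7, 6, 8, 7, 9]))  # -> appropriate
print(classify([1, 2, 2, 8, 9, 5, 1, 9, 8]))  # -> disagreement
```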
Abstract:
Organizational intelligence can be seen as a function of the viable structure of an organization. With the integration of the Viable System Model and Soft Systems Methodology (systemic approaches to organizational management) focused on the role of the intelligence function, it is possible to elaborate a model of action with a structured methodology to prospect, select, treat and distribute information to the entire organization, improving the efficacy and efficiency of all processes. This combination of methodologies is called the Intelligence Systems Methodology (ISM), whose assumptions and dynamics are delimited in this paper. The ISM is composed of two simultaneous activities: the Active Environmental Mapping and the Stimulated Action Cycle. The elaboration of the formal ISM description opens opportunities for applications of the methodology to real situations, offering a new path for this specific issue of systems thinking: the intelligence systems. Knowledge Management Research & Practice (2012) 10, 141-152. doi:10.1057/kmrp.2011.44
Abstract:
This paper addresses the numerical solution of random crack propagation problems using the coupling of the boundary element method (BEM) and reliability algorithms. The crack propagation phenomenon is efficiently modelled using BEM, due to its mesh reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in search of the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem nonlinearity. The computational cost of direct coupling proved to be a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
With the increasing emphasis on health and well-being, nutrition aspects need to be incorporated as a dimension of product development. Thus, the production of a high-fibre snack food from a mixture of corn and flaxseed flours was optimized by response surface methodology. The independent variables considered in this study were feed moisture, process temperature and flaxseed flour addition, as they were found to significantly impact the resultant product. These variables were studied according to a rotatable composite design matrix (-1.68, -1, 0, 1, 1.68). The response variable was the expansion ratio, since it is highly correlated with acceptability. The optimum corn-flaxseed snack presented a sevenfold increase in dietary fibre and an almost 100% increase in protein content compared to the pure corn snack, and yielded an acceptability score of 6.93. This score was similar to those observed for corn snack brands on the market, indicating the potential commercial use of this new product, which can help to increase the daily consumption of dietary fibre.
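The coded levels quoted above (-1.68, -1, 0, 1, 1.68) are those of a rotatable central composite design for three factors, which can be generated directly; the six center replicates below are an assumed, typical choice, not taken from the paper.

```python
import itertools

import numpy as np

# Rotatable central composite design, k = 3 factors:
# axial distance alpha = (2**k)**0.25 ~ 1.682 (the +-1.68 levels above).
k = 3
alpha = (2 ** k) ** 0.25

factorial = np.array(list(itertools.product([-1, 1], repeat=k)))  # 8 corner runs
axial = np.vstack([v * row for v in (-alpha, alpha)
                   for row in np.eye(k)])                         # 6 axial runs
center = np.zeros((6, k))                                         # 6 center replicates
design = np.vstack([factorial, axial, center])
print(design.shape)  # (20, 3)
```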
Abstract:
Response surface methodology (RSM), based on a 2^2 full factorial design, was used to evaluate the effect of moisture on xylose recovery by diethyloxalate (DEO) hydrolysis. Experiments were carried out in laboratory reactors (10 mL glass ampoules) containing properly ground corn stover (0.5 g). The ampoules were kept at 160 degrees C for 90 min. Both DEO concentration and corn stover moisture content were statistically significant at the 99% confidence level. The maximum xylose recovery predicted by the response surface methodology was achieved with both DEO concentration and corn stover moisture near their highest levels. We expanded this region with an overlay plot as a graphical optimization, using a xylose recovery response above 80%. The statistical model was validated by testing a specific condition within the satisfied overlay plot area. Experimentally, a maximum xylose recovery of 81.2% was achieved using an initial corn stover moisture of 60% and a DEO concentration of 4% w/w. The model showed that xylose recovery during DEO corn stover acid hydrolysis increases as the corn stover moisture level increases. This observation could be important when harvesting corn before it is fully dried in the field. Corn stover moisture was an important variable for improving xylose recovery by DEO acid hydrolysis. (c) 2011 Elsevier Ltd. All rights reserved.
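The factorial fit behind such an analysis can be sketched as follows; the response values are invented for the illustration, and only the model form (first-order with interaction on coded 2^2 factors, plus center points) follows the abstract.

```python
import numpy as np

# Coded factors: x1 = DEO concentration, x2 = corn stover moisture;
# y = xylose recovery (%). Responses below are made up for the sketch.
X_coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                    [0, 0], [0, 0], [0, 0]])
y = np.array([52.0, 64.0, 63.0, 81.0, 65.0, 66.0, 64.0])

# First-order model with interaction: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
A = np.column_stack([np.ones(len(y)), X_coded[:, 0], X_coded[:, 1],
                     X_coded[:, 0] * X_coded[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b12 = coef
print(f"b1={b1:.1f}, b2={b2:.1f}")  # both positive: raise both factors
```

Positive `b1` and `b2` reproduce the abstract's qualitative finding: recovery improves toward the highest levels of both factors.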
Abstract:
This paper presents a new parallel methodology for calculating the determinant of matrices of order n, with computational complexity O(n), using the Gauss-Jordan elimination method and Chio's rule as references. We present our methodology step by step in clear mathematical language, demonstrating how to calculate the determinant of a matrix of order n in an analytical format. We also present a computational model with one sequential algorithm and one parallel algorithm in pseudocode.
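The serial rule the paper builds on, Chio's condensation, is compact enough to sketch. This is the textbook sequential version, not the paper's parallel O(n) formulation, and it omits pivoting for brevity (it assumes a nonzero leading entry at every step).

```python
def chio_det(a):
    # Chio's condensation: for an n x n matrix with a[0][0] != 0,
    # det(A) = det(B) / a[0][0]**(n-2), where B is (n-1) x (n-1) with
    # b[i][j] = a[0][0]*a[i+1][j+1] - a[i+1][0]*a[0][j+1].
    n = len(a)
    if n == 1:
        return a[0][0]
    pivot = a[0][0]
    b = [[pivot * a[i][j] - a[i][0] * a[0][j]
          for j in range(1, n)] for i in range(1, n)]
    return chio_det(b) / pivot ** (n - 2)

print(chio_det([[2, 1, 3], [0, 4, 1], [1, 2, 2]]))  # -> 1.0
```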
Abstract:
Fraud is a global problem that has demanded increasing attention due to the rapid expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether the fraud detection model is accurate enough to correctly classify a case as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve classification accuracy. In this paper, we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors classification. Via a large simulation study and various real datasets, we find that the probabilistic networks are a strong modeling option with high predictive capacity, showing a large gain from the bagging procedure when compared to traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
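The bagging loop described above can be sketched with a deliberately weak base learner, a one-dimensional threshold stump rather than the paper's k-dependence probabilistic networks; the toy data and stump are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: class 0 centered at -1, class 1 centered at +1.
X = np.concatenate([rng.normal(-1, 1, 100), rng.normal(1, 1, 100)])
y = np.array([0] * 100 + [1] * 100)

def fit_stump(Xb, yb):
    # Weak learner: threshold at the midpoint of the two class means.
    t = (Xb[yb == 0].mean() + Xb[yb == 1].mean()) / 2
    return lambda x: (x > t).astype(int)

def bagging(X, y, n_models=25):
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))   # bootstrap replicate
        models.append(fit_stump(X[idx], y[idx]))
    # Majority vote across the ensemble
    return lambda x: (np.mean([m(x) for m in models], axis=0) > 0.5).astype(int)

predict = bagging(X, y)
acc = (predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```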
Abstract:
Background: One goal of gene expression profiling is to identify signature genes that robustly distinguish different types or grades of tumors. Several tumor classifiers based on expression profiling have been proposed using microarray techniques. Due to important differences in the probabilistic models of the microarray and SAGE technologies, it is important to develop suitable techniques to select specific genes from SAGE measurements. Results: A new framework is proposed to select specific genes that distinguish different biological states based on the analysis of SAGE data. The framework applies the bolstered error for the identification of strong genes that separate the biological states in a feature space defined by the gene expression of a training set. Credibility intervals defined from a probabilistic model of SAGE measurements are used to identify, among all gene groups selected by the strong-genes method, the genes that distinguish the different states most reliably. A score taking into account both the credibility and the bolstered error values is proposed to rank the groups of considered genes. Results obtained using SAGE data from gliomas are presented, corroborating the introduced methodology. Conclusion: The model representing counting data, such as SAGE, provides additional statistical information that allows a more robust analysis. This additional information is incorporated in the methodology described in the paper. The introduced method is suitable for identifying signature genes that lead to a good separation of the biological states using SAGE, and may be adapted to other counting methods such as Massive Parallel Signature Sequencing (MPSS) or the recent Sequencing-By-Synthesis (SBS) technique. Some of the genes identified by the proposed method may be useful for generating classifiers.
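The bolstered error mentioned above can be approximated by Monte Carlo: spread each training point with a Gaussian kernel and average the classifier's error mass over the kernel, instead of counting errors only at the points themselves. The one-dimensional data, threshold classifier, and kernel width below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D training set and a fixed threshold classifier.
X = np.array([-2.0, -1.5, -0.5, 0.4, 1.2, 1.8, 2.3])
y = np.array([0, 0, 0, 1, 1, 1, 1])
threshold = 0.0
classify = lambda x: (x > threshold).astype(int)

def bolstered_error(X, y, sigma=0.5, draws=5000):
    # For each sample, draw from its bolstering kernel and record the
    # fraction of kernel mass the classifier gets wrong.
    errs = []
    for xi, yi in zip(X, y):
        samples = rng.normal(xi, sigma, draws)
        errs.append((classify(samples) != yi).mean())
    return float(np.mean(errs))

resub = (classify(X) != y).mean()  # plain resubstitution error
print(f"resubstitution: {resub:.3f}, bolstered: {bolstered_error(X, y):.3f}")
```

Here resubstitution is exactly zero, while the bolstered estimate stays positive: points near the boundary contribute error mass even when classified correctly.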
Abstract:
Structural durability is an important criterion that must be evaluated for every type of structure. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability, especially when these structures are constructed in aggressive atmospheres. Chloride ingress triggers the corrosion of reinforcements; therefore, by modelling this phenomenon, the corrosion process can be better evaluated, as can the structural durability. Corrosion begins when a threshold level of chloride concentration is reached at the steel bars of the reinforcements. Despite the robustness of several models proposed in the literature, deterministic approaches fail to accurately predict the corrosion initiation time due to the inherent randomness observed in this process. In this regard, structural durability can be more realistically represented using probabilistic approaches. This paper addresses the analysis of the probabilistic corrosion initiation time in reinforced concrete structures exposed to chloride penetration. The chloride penetration is modelled using Fick's diffusion law, which simulates the chloride diffusion process considering time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the first order reliability method, with a direct coupling approach. Some examples are considered in order to study these phenomena. Moreover, a simplified method is proposed to determine optimal values for the concrete cover.
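The probabilistic initiation analysis can be sketched with crude Monte Carlo on Fick's solution for a semi-infinite medium. All distributions and parameter values below are assumed for illustration, and the paper additionally uses FORM with direct coupling, which is not reproduced here.

```python
import math
import random

random.seed(42)

def chloride(x_mm, t_years, Cs, D_mm2_per_year):
    # Fick's second law, semi-infinite medium, constant surface concentration:
    # C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
    return Cs * (1 - math.erf(x_mm / (2 * math.sqrt(D_mm2_per_year * t_years))))

# Illustrative random variables (all values assumed for the sketch).
N, t = 100_000, 50  # Monte Carlo samples; service life in years
failures = 0
for _ in range(N):
    cover = random.gauss(40, 5)                    # concrete cover, mm
    D = random.lognormvariate(math.log(30), 0.3)   # diffusion coeff., mm^2/year
    Cs = random.gauss(0.8, 0.1)                    # surface concentration
    Cth = random.gauss(0.45, 0.05)                 # corrosion threshold
    if chloride(cover, t, Cs, D) > Cth:
        failures += 1

pf = failures / N
print(f"P(corrosion initiated within {t} years) ~ {pf:.3f}")
```

Repeating the estimate over a range of cover depths is the simplest route to the cover-optimization idea mentioned at the end of the abstract.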
Abstract:
Soils of a large tropical area with differentiated landscapes cannot be treated uniformly for ecological applications. We intend to develop a framework based on physiography that can be used in regional applications. The study region occupies more than 1.1 million km² and is located at the junction of the savanna region of Central Brazil and the Amazon forest. It includes a portion of the high sedimentary Central Brazil plateau and large areas of mostly peneplained crystalline shield on the border of the wide inner-Amazon low sedimentary plain. A first broad subdivision was made into landscape regions followed by a more detailed subdivision into soil regions. Mapping information was extracted from soil survey maps at scales of 1:250000-1:500000. Soil units were integrated within a homogenized legend using a set of selected attributes such as taxonomic term, the texture of the B horizon and the associated vegetation. For each region, a detailed inventory of the soil units with their area distribution was elaborated. Ten landscape regions and twenty-four soil regions were recognized and delineated. Soil cover of a region is normally characterized by a cluster composed of many soil units. Soil diversity is comparable in the landscape and the soil regions. Composition of the soil cover is quantitatively expressed in terms of area extension of the soil units. Such geographic divisions characterized by grouping soil units and their spatial estimates must be used for regional ecological applications.
Abstract:
This paper describes a design methodology for piezoelectric energy harvesters that thinly encapsulate the mechanical devices and exploit resonances from higher-order vibrational modes. The direction of polarization determines the sign of the piezoelectric tensor to avoid cancellations of electric fields from opposite polarizations in the same circuit. The resultant modified equations of state are solved by the finite element method (FEM). Combining this method with the solid isotropic material with penalization (SIMP) method for piezoelectric material, we have developed an optimization methodology that optimizes the piezoelectric material layout and polarization direction. Updating the density function of the SIMP method is performed based on sensitivity analysis, sequential linear programming in the early stage of the optimization, and the phase field method in the latter stage.
Abstract:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that is behind Bayesian networks. This paper explores the computational complexity of inferences in semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
Abstract:
Due to the growing interest in social networks, link prediction has received significant attention. Link prediction is mostly based on graph-based features, with some recent approaches focusing on domain semantics. We propose algorithms for link prediction that use a probabilistic ontology to enhance the analysis of the domain and to handle the unavoidable uncertainty in the task (the ontology is specified in the probabilistic description logic crALC). The scalability of the approach is investigated through a combination of semantic assumptions and graph-based features. We evaluate our proposal empirically and compare it with standard solutions in the literature.
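As a baseline for the graph-based features the abstract mentions, here is a sketch of common-neighbors and Adamic-Adar link-prediction scores on a toy undirected graph; the graph itself is made up, and the Adamic-Adar score assumes every common neighbor has degree greater than one.

```python
import math
from itertools import combinations

# Toy undirected graph as an edge set.
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    return len(adj[u] & adj[v])

def adamic_adar(u, v):
    # Common neighbors weighted inversely by log-degree
    return sum(1 / math.log(len(adj[w])) for w in adj[u] & adj[v])

# Score every currently absent edge and rank by common-neighbors count.
candidates = [(u, v) for u, v in combinations(sorted(adj), 2)
              if v not in adj[u]]
ranked = sorted(candidates, key=lambda p: -common_neighbors(*p))
print(ranked[0], adamic_adar(*ranked[0]))  # most likely missing link
```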