899 results for information bottleneck method
Abstract:
The general objective of this study was to evaluate the ordered weighted averaging (OWA) method, integrated with a geographic information system (GIS), for defining priority areas for forest conservation in a Brazilian river basin, aiming to increase regional biodiversity. We demonstrated how a range of alternatives can be obtained by applying OWA, including the one produced by the weighted linear combination method, and also how the analytic hierarchy process (AHP) can be used to structure the decision problem and assign importance to each criterion. The criteria considered important for this study were: proximity to forest patches; proximity among forest patches with larger core areas; proximity to surface water; distance from roads; distance from urban areas; and vulnerability to erosion. OWA requires two sets of criteria weights: the weights of relative criterion importance and the order weights. Thus, a participatory technique was used to define the criteria set and the criterion importance (based on AHP). To obtain the second set of weights, we considered the influence, as well as the importance, of each criterion on the decision-making process. The sensitivity analysis indicated coherence among the criterion importance weights, the order weights, and the solution. According to this analysis, only the proximity to surface water criterion is not important for identifying priority areas for forest conservation. Finally, we can highlight that the OWA method is flexible, easy to implement and, above all, facilitates a better understanding of the alternative land-use suitability patterns. (C) 2008 Elsevier B.V. All rights reserved.
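The interplay of the two weight sets can be sketched with a generic GIS-style OWA combination; this is an illustration only, not the authors' implementation, and all criterion scores and weights below are invented. Importance weights scale each standardized criterion score, the scaled scores are ranked, and the order weights are applied to the ranked values.

```python
# A minimal sketch of ordered weighted averaging (OWA) with criterion
# importance weights and order weights; assumes standardized scores in
# [0, 1] and both weight sets summing to 1. All values are illustrative.

def owa(scores, importance, order_weights):
    # Scale by importance; the factor len(scores) makes equal order
    # weights reproduce the plain weighted linear combination (WLC).
    weighted = sorted((s * w * len(scores) for s, w in zip(scores, importance)),
                      reverse=True)
    # Order weights run from the best- to the worst-performing criterion.
    return sum(v * b for v, b in zip(order_weights, weighted))

# Equal order weights give the WLC result; skewed order weights shift
# the evaluation toward optimism (first weights) or pessimism (last).
wlc = owa([0.8, 0.4, 0.6], [0.5, 0.3, 0.2], [1/3, 1/3, 1/3])    # ~0.64
optimistic = owa([0.8, 0.4, 0.6], [0.5, 0.3, 0.2], [1.0, 0.0, 0.0])
```

Varying only the order weights is what produces the "range of alternatives" between fully optimistic and fully pessimistic combination rules.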
Abstract:
Xanthomonas axonopodis pv. passiflorae causes bacterial spot in passion fruit. It attacks the purple and yellow passion fruit as well as the sweet passion fruit. The diversity of 87 isolates of pv. passiflorae collected from 22 fruit orchards in Brazil was evaluated using molecular profiles and statistical procedures, including a dendrogram based on the unweighted pair-group method with arithmetic averages, analysis of molecular variance (AMOVA), and an assignment test that provides information on genetic structure at the population level. Isolates from another eight pathovars were included in the molecular analyses, and all were shown to have a distinct repetitive sequence-based polymerase chain reaction profile. The amplified fragment length polymorphism technique revealed considerable diversity among isolates of pv. passiflorae, and AMOVA showed that most of the variance (49.4%) was due to differences between localities. Cluster analysis revealed that most genotypic clusters were homogeneous and that variance was associated primarily with geographic origin. The disease adversely affects fruit production and may kill infected plants. A method for rapid diagnosis of the pathogen, even before disease symptoms become evident, has value for producers. Here, a set of primers (Xapas) was designed by exploiting a single-nucleotide polymorphism between the sequences of the intergenic 16S-23S rRNA spacer region of the pathovars. Xapas was shown to effectively detect all pv. passiflorae isolates and is recommended for disease diagnosis in passion fruit orchards.
Abstract:
Market-based transmission expansion planning gives investors information on where investment is most cost-efficient and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economic losses during operation in a competitive market. It takes both the investors' and the planner's points of view and improves on the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottleneck and the amount of loss arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors undertaking their investments. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), which shows how EEL can predict the current system bottleneck under future operational conditions and how EEL can be used as one of the planning objectives to determine future optimal plans. Monte Carlo simulation, a well-known simulation method, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
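The Monte Carlo side of such an index can be conveyed with a deliberately tiny sketch: sample component outages, price the resulting unserved demand, and average. The two-line system, outage probabilities and price below are invented for illustration and are not the paper's EEL or NCP formulation.

```python
# A minimal Monte Carlo sketch of an expected-economic-loss style index.
# Two parallel lines feed one load; each line fails independently.
# All numbers are illustrative, not from the paper.
import random

def sample_loss(line_capacities, outage_probs, demand, price, rng):
    # Each line is either available or on outage in this sampled state.
    capacity = sum(c for c, p in zip(line_capacities, outage_probs)
                   if rng.random() > p)
    curtailed = max(0.0, demand - capacity)
    return curtailed * price          # economic loss of unserved demand

def expected_loss(n_samples=100_000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sample_loss([60.0, 60.0], [0.05, 0.05],
                             demand=100.0, price=2.0, rng=rng)
    return total / n_samples

eel = expected_loss()   # analytically ~8.1 for these numbers
```

For this toy system the exact value is 2 x 0.05 x 0.95 x 40 x 2 + 0.05^2 x 100 x 2 = 8.1, which the sample average approaches as n_samples grows.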
Abstract:
The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly non-equilibrium conditions.
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, an important tool for considering the future in terms of strategy. However, in many organizations there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. It might be asked, then, what happens when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how it was conceived. The proposed method is grounded in theoretical concepts from the field of futures studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was then tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is, in fact, operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and points of inconsistency. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
The general objective of this work was to study the contribution of ERP systems to the quality of managerial accounting information, through the perception of managers of large Brazilian companies. The initial premise was that we now live in a business reality characterized by a global and competitive scenario, in which information about enterprise performance and the evaluation of intangible assets are necessary conditions for companies' survival. The exploratory research is based on a sample of 37 managers of large Brazilian companies. The analysis of the data, treated by means of a qualitative method, showed that the great majority of the companies in the sample (86%) have an ERP implemented. It also showed that this system is used in combination with other application software. Most managers were also satisfied with the information generated with respect to the Time and Content dimensions. However, with regard to the qualitative nature of the information, the ERP made some analyses possible when the Balanced Scorecard was adopted, but information able to provide an estimate of the investments made in intangible assets was not obtained. These results suggest that in these companies ERP systems are not adequate to support strategic decisions.
Abstract:
The data of nitrogen adsorption on pillared clays (PILCs) are converted to comparison plots (t-plots) to derive their pore size distribution (PSD). As in the MP method, the surface area of a group of pores having similar pore sizes is calculated from the slopes of tangent lines at two succeeding points on a comparison plot. In the modified MP method of this work, the tangent line is extrapolated to the adsorption axis on the t-plot, and the difference between intercepts is used to obtain the volume of the group of pores. From the surface area and pore volume, the average width of the pore group can be calculated, and hence the PSDs of PILCs are obtained by carrying out this calculation procedure from high to low t. With this method, PSDs of several pillared clays are calculated over a wide pore size range, from micropores to mesopores. It is found that the modified MP method can underestimate the width of ultramicropores, owing to the enhancement of adsorption energy in these pores. Nevertheless, the method can be very useful in calculating the surface area and pore volume, as well as a mean width, of these pores. For supermicropores and mesopores, pore size can also be underestimated, owing to deviation of the pore shape from a slit. The principles of the modified MP method, as well as problems associated with it, are thoroughly discussed in this paper. In general, this modified method provides practically meaningful results which are consistent with the pore dimensions obtained from powder X-ray diffraction measurements, but involves no complicated theoretical treatment or assumptions.
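The tangent-slope and intercept bookkeeping described above can be sketched as follows, assuming the t-plot is given as parallel arrays of statistical thickness and adsorbed liquid volume. Unit conversions are omitted and the loop runs in simple index order rather than from high to low t, so this illustrates only the arithmetic, not the authors' procedure in full.

```python
# A minimal sketch of MP-style t-plot analysis: tangent slopes at two
# succeeding points give the surface area of a pore group, intercept
# differences give its volume, and width = 2V/A assumes slit pores.
# Input arrays and units are illustrative.

def mp_psd(t, v):
    """Return a list of (surface_area, pore_volume, mean_width) per group."""
    groups = []
    for i in range(len(t) - 2):
        # Finite-difference tangent slopes at two succeeding points.
        s1 = (v[i + 1] - v[i]) / (t[i + 1] - t[i])
        s2 = (v[i + 2] - v[i + 1]) / (t[i + 2] - t[i + 1])
        area = s1 - s2   # area of the pore group filling between the points
        # Extrapolate each tangent to the adsorption axis (t = 0);
        # the intercept difference is the volume of the pore group.
        b1 = v[i] - s1 * t[i]
        b2 = v[i + 1] - s2 * t[i + 1]
        volume = b2 - b1
        if area > 0:
            groups.append((area, volume, 2 * volume / area))
    return groups
```

The slit-pore factor 2V/A is the standard geometric relation between volume, wall area and width for parallel-walled pores.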
Abstract:
Recent studies have demonstrated that spatial patterns of fMRI BOLD activity distribution over the brain may be used to classify different groups or mental states. These studies are based on the application of advanced pattern recognition approaches and multivariate statistical classifiers. Most published articles in this field focus on improving accuracy rates, and many approaches have been proposed to accomplish this task. Nevertheless, a point inherent to most machine learning methods (and still relatively unexplored in neuroimaging) is how the discriminative information can be used to characterize groups and their differences. In this work, we introduce the Maximum Uncertainty Linear Discrimination Analysis (MLDA) and show how it can be applied to infer groups' patterns by discriminant hyperplane navigation. In addition, we show that it naturally defines a behavioral score, i.e., an index quantifying the distance between the states of a subject and predefined groups. We validate and illustrate this approach using data from a motor block-design fMRI experiment with 35 subjects. (C) 2008 Elsevier Inc. All rights reserved.
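The idea of a regularized linear discriminant that yields a scalar, distance-like score can be sketched in a toy two-feature setting. Here the pooled within-group scatter is blended with a multiple of the identity as a stand-in for the maximum-uncertainty regularization; the data, blend factor and all names are illustrative, not the paper's MLDA derivation.

```python
# A minimal sketch of a regularized Fisher discriminant: blend the pooled
# within-group scatter with (trace/2) * identity, then project samples on
# w = inv(Sw_reg) @ (m1 - m0). The projection acts as a scalar score.
# Toy data; all numbers illustrative.

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(2)]

def scatter(vs, m):
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def discriminant_direction(group0, group1, blend=0.5):
    m0, m1 = mean(group0), mean(group1)
    sw = scatter(group0, m0)
    s1 = scatter(group1, m1)
    for i in range(2):
        for j in range(2):
            sw[i][j] += s1[i][j]
    lam = (sw[0][0] + sw[1][1]) / 2          # average eigenvalue via trace
    a, b = (1 - blend) * sw[0][0] + blend * lam, (1 - blend) * sw[0][1]
    c, d = (1 - blend) * sw[1][0], (1 - blend) * sw[1][1] + blend * lam
    det = a * d - b * c
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    # 2x2 inverse applied to the mean difference.
    return [(d * diff[0] - b * diff[1]) / det,
            (-c * diff[0] + a * diff[1]) / det]

def score(w, x):
    return w[0] * x[0] + w[1] * x[1]

g0 = [[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0]]
g1 = [[1.0, 1.1], [1.2, 0.9], [0.9, 1.0]]
w = discriminant_direction(g0, g1)
```

A subject's score relative to the two group score distributions plays the role of the behavioral index described in the abstract.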
Abstract:
In this paper, we propose a method based on association rule mining to enhance the diagnosis of medical images (mammograms). It combines low-level features automatically extracted from images with high-level knowledge from specialists to search for patterns. Our method analyzes medical images and automatically generates suggested diagnoses by mining association rules. The suggested diagnoses are used to accelerate the image analysis performed by specialists as well as to provide them with an alternative to consider. The proposed method uses two new algorithms, PreSAGe and HiCARe. The PreSAGe algorithm combines, in a single step, feature selection and discretization, and reduces the mining complexity. Experiments performed with PreSAGe show that this algorithm is highly suitable for feature selection and discretization in medical images. HiCARe is a new associative classifier. The HiCARe algorithm has an important property that makes it unique: it assigns multiple keywords per image to suggest a diagnosis with high accuracy. Our method was applied to real datasets, and the results show high sensitivity (up to 95%) and accuracy (up to 92%), allowing us to claim that association rules are a powerful means of assisting in the diagnosing task.
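A deliberately simple, single-antecedent version of associative classification conveys the general idea (it is not PreSAGe or HiCARe, whose details are not given here): discretized image features act as items, rules of the form feature -> diagnosis are kept when they pass support and confidence thresholds, and every matching rule contributes a keyword for a new image. The toy feature names, labels and thresholds below are invented.

```python
# A minimal sketch of associative classification on discretized features.
# Each transaction is (set of feature items, diagnosis label); rules with
# enough support and confidence suggest keywords for unseen images.
from collections import Counter

transactions = [
    ({"high_density", "irregular_margin"}, "malignant"),
    ({"high_density", "irregular_margin"}, "malignant"),
    ({"low_density", "smooth_margin"}, "benign"),
    ({"low_density", "smooth_margin"}, "benign"),
    ({"high_density", "smooth_margin"}, "benign"),
]

def mine_rules(data, min_support=0.3, min_confidence=0.8):
    n = len(data)
    feature_counts = Counter()
    pair_counts = Counter()
    for features, label in data:
        for f in features:
            feature_counts[f] += 1
            pair_counts[(f, label)] += 1
    rules = {}
    for (f, label), c in pair_counts.items():
        support = c / n
        confidence = c / feature_counts[f]
        if support >= min_support and confidence >= min_confidence:
            rules[f] = (label, confidence)
    return rules

def suggest(rules, features):
    # Every matching rule contributes a keyword (multi-keyword output).
    return {rules[f][0] for f in features if f in rules}

rules = mine_rules(transactions)
```

Features that appear with both labels (here "high_density") fail the confidence threshold and generate no rule, which is the basic pruning mechanism of this family of classifiers.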
Abstract:
This book chapter represents a synthesis of the work which started in my PhD and which has been the conceptual basis for all of my research since 1993. The chapter presents a method that scientists and managers can use to select the type of remotely sensed data that meets their information needs for a mapping, monitoring or modelling application. The work draws on results from several of my ARC projects, CRC Rainforest and Coastal projects, and the theses of P. Scarth, K. Joyce and C. Roelfsema.
Abstract:
In population pharmacokinetic studies, the precision of parameter estimates depends on the population design. Methods based on the Fisher information matrix have been developed and extended to population studies to evaluate and optimize designs. In this paper we propose simple programming tools to evaluate population pharmacokinetic designs. This involved the development of an expression for the Fisher information matrix for nonlinear mixed-effects models, including estimation of the variance of the residual error. We implemented this expression as a generic function for two software applications: S-PLUS and MATLAB. The evaluation of population designs based on two pharmacokinetic examples from the literature is shown to illustrate the efficiency and the simplicity of this theoretical approach. Although no design optimization method is provided, these functions can be used to select and compare population designs among a large set of possible designs, avoiding extensive simulation.
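For a flavor of the approach, here is a sketch of design evaluation via the Fisher information matrix for the fixed effects of a one-compartment bolus model C(t) = (dose/V)·exp(−kt) with additive error; unlike the paper's full population expression, it ignores the random effects and variance components, and all parameter values are illustrative.

```python
# A minimal sketch of comparing sampling designs with the Fisher
# information matrix for fixed effects (V, k) of a one-compartment bolus
# model with additive residual error sigma. Values are illustrative.
import math

def fim(times, dose=100.0, V=10.0, k=0.2, sigma=0.1):
    """2x2 information matrix accumulated over the sampling times."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        c = dose / V * math.exp(-k * t)
        dV = -c / V          # partial derivative of C(t) w.r.t. V
        dk = -t * c          # partial derivative of C(t) w.r.t. k
        g = (dV, dk)
        for i in range(2):
            for j in range(2):
                m[i][j] += g[i] * g[j] / sigma**2
    return m

def d_criterion(m):
    # Determinant of the 2x2 matrix; a larger value means a more
    # informative design (tighter joint confidence region).
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# A rich schedule should dominate a sparse one under the D-criterion:
rich = d_criterion(fim([0.5, 1, 2, 4, 8, 12]))
sparse = d_criterion(fim([1, 4]))
```

Ranking candidate designs by such a criterion is what allows design selection without running simulation studies for each candidate.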
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P-1 and P-2), F-1, F-2, and both backcross generations (B-1 and B-2) derived from crossing the F-1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals.
The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought resistance-trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model. The presence of the two major genes was confirmed with the addition of an unspecified number of polygenes.
Abstract:
Much progress has been made on inferring population history from molecular data. However, complex demographic scenarios have been considered rarely or have proved intractable. The serial introduction of the South-Central American cane toad Bufo marinus in various Caribbean and Pacific islands involves four major phases: a possible genetic admixture during the first introduction, a bottleneck associated with founding, a transitory population boom, and finally, a demographic stabilization. A large amount of historical and demographic information is available for those introductions and can be combined profitably with molecular data. We used a Bayesian approach to combine this information with microsatellite (10 loci) and enzyme (22 loci) data, and used a rejection algorithm to simultaneously estimate the demographic parameters describing the four major phases of the introduction history. The general historical trends supported by microsatellites and enzymes were similar. However, there was stronger support for a larger bottleneck at introduction for microsatellites than for enzymes, and for a more balanced genetic admixture for enzymes than for microsatellites. Very little information was obtained from either marker about the transitory population boom observed after each introduction. Possible explanations for differences in resolution of demographic events and discrepancies between results obtained with microsatellites and enzymes were explored. Limits of our model and method for the analysis of nonequilibrium populations were discussed.
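The rejection-algorithm idea can be illustrated on a toy bottleneck model (not the authors' four-phase cane toad model): the founding population size is the single unknown, expected heterozygosity after the bottleneck serves as the summary statistic, and prior draws are kept when the simulated statistic falls close to the observed one. The drift model, prior and tolerance below are all invented for illustration.

```python
# A minimal sketch of Bayesian rejection sampling (ABC-style) on a toy
# bottleneck: heterozygosity decays by (1 - 1/2N) per generation of drift
# in a founding population of size N. All values are illustrative.
import random

def simulate_heterozygosity(n_founders, h0=0.8, generations=20):
    h = h0
    for _ in range(generations):
        h *= 1 - 1 / (2 * n_founders)   # expected loss per generation
    return h

def abc_rejection(h_observed, n_draws=20000, tolerance=0.01, seed=1):
    random.seed(seed)
    accepted = []
    for _ in range(n_draws):
        n = random.randint(2, 500)       # uniform prior on founder number
        if abs(simulate_heterozygosity(n) - h_observed) < tolerance:
            accepted.append(n)           # keep draws matching the data
    return accepted

# Pretend the "observed" data came from 25 founders; the accepted draws
# approximate the posterior on founding population size.
posterior = abc_rejection(h_observed=simulate_heterozygosity(25))
```

The spread of the accepted draws around the true value is what conveys how much (or how little) information the summary statistic carries about each demographic phase.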
Abstract:
The aim of this study was to develop and trial a method to monitor the evolution of clinical reasoning in a PBL curriculum that is suitable for use in a large medical school. Termed Clinical Reasoning Problems (CRPs), it is based on the notion that clinical reasoning is dependent on the identification and correct interpretation of certain critical clinical features. Each problem consists of a clinical scenario comprising presentation, history and physical examination. Based on this information, subjects are asked to nominate the two most likely diagnoses and to list the clinical features that they considered in formulating their diagnoses, indicating whether these features supported or opposed the nominated diagnoses. Students at different levels of medical training completed a set of 10 CRPs as well as the Diagnostic Thinking Inventory, a self-reporting questionnaire designed to assess reasoning style. Responses were scored against those of a reference group of general practitioners. Results indicate that the CRPs are an easily administered, reliable and valid assessment of clinical reasoning, able to successfully monitor its development throughout medical training. Consequently, they can be employed to assess clinical reasoning skill in individual students and to evaluate the success of undergraduate medical schools in providing effective tuition in clinical reasoning.
Abstract:
We describe a novel approach to explore DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms using fixed length DNA sequences. After creating the DNA-related histograms, a correlation between pairs of histograms is computed, producing a global correlation matrix. These data are then used as input to several data processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed and the extensive results reveal that the proposed method is able to generate significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
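The core of the pipeline, counting fixed-length words per sequence and correlating the resulting histograms pairwise into a global correlation matrix, can be sketched as follows; the word length, toy sequences and names are illustrative, not the paper's data.

```python
# A minimal sketch of the histogram-correlation pipeline: count length-2
# DNA words in each sequence, then build a pairwise Pearson correlation
# matrix over the word histograms. Toy sequences; all names illustrative.
from collections import Counter
from itertools import product

WORD = 2
WORDS = ["".join(p) for p in product("ACGT", repeat=WORD)]

def histogram(seq):
    """Counts of every length-WORD word, in a fixed word order."""
    counts = Counter(seq[i:i + WORD] for i in range(len(seq) - WORD + 1))
    return [counts[w] for w in WORDS]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

sequences = {
    "chrA": "ACGTACGTACGTAAAT",
    "chrB": "ACGTACGTACGTAAGT",  # one substitution away from chrA
    "chrC": "GGGGCCCCGGGGCCCC",  # very different composition
}

hists = {name: histogram(s) for name, s in sequences.items()}
corr = {(a, b): pearson(hists[a], hists[b]) for a in hists for b in hists}
```

Compositionally similar sequences (chrA, chrB) produce highly correlated histograms while dissimilar ones (chrA, chrC) do not, which is the signal the downstream clustering and phylogenetic comparisons exploit.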