930 results for Classification algorithms
Abstract:
This paper presents a strategy for the planning of WDM optical networks, specifically the problem of Routing and Wavelength Allocation (RWA) with the aim of minimizing the number of wavelengths used; in this case, the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality and high performance. The key point is the degradation of the maximum load on the virtual links in favor of minimizing the number of wavelengths used; the objective is to find a good compromise between the metrics of the virtual topology (load in Gb/s) and of the physical topology (number of wavelengths). The simulations suggest good results when compared to existing ones in the literature.
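To make the meta-heuristic concrete, here is a minimal simulated-annealing sketch for a toy wavelength-assignment instance, treating Min-RWA as coloring a conflict graph of lightpaths; the instance, cooling schedule and cost function are assumptions, not the paper's implementation.

    import math
    import random

    # Toy Min-RWA viewed as conflict-graph coloring: two lightpaths that
    # share a fiber link cannot reuse a wavelength. The conflict graph
    # below is illustrative, not the paper's instance.
    conflicts = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

    def clashes(assign):
        """Count pairs of conflicting lightpaths sharing a wavelength."""
        return sum(assign[a] == assign[b]
                   for a in conflicts for b in conflicts[a] if a < b)

    def anneal(wavelengths=3, steps=5000, temp=1.0, alpha=0.995):
        """Simulated annealing: try to fit all lightpaths into the given
        number of wavelengths; shrink `wavelengths` while the cost is 0."""
        assign = {p: random.randrange(wavelengths) for p in conflicts}
        cost = clashes(assign)
        for _ in range(steps):
            p = random.choice(list(conflicts))
            old = assign[p]
            assign[p] = random.randrange(wavelengths)
            new = clashes(assign)
            # Accept improvements always, uphill moves with Boltzmann probability.
            if new <= cost or random.random() < math.exp((cost - new) / temp):
                cost = new
            else:
                assign[p] = old
            temp *= alpha
        return assign, cost

    print(anneal())   # e.g. ({0: 0, 1: 1, 2: 2, 3: 0}, 0)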
Abstract:
This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
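For readers unfamiliar with the information form, the following is the generic information-filter measurement update for a single linear model; it is textbook material, not the note's jump-system algorithm, and the matrices are illustrative.

    import numpy as np

    def information_update(Y_prior, y_prior, H, R, z):
        """Information-filter measurement update: fuse observation
        z = H x + v, v ~ N(0, R), into the information matrix Y = P^{-1}
        and information vector y = P^{-1} x."""
        Rinv = np.linalg.inv(R)
        Y_post = Y_prior + H.T @ Rinv @ H      # information matrix update
        y_post = y_prior + H.T @ Rinv @ z      # information vector update
        return Y_post, y_post

    # Illustrative 2-state system observed through its first component.
    Y0 = np.eye(2) * 0.1                       # weak prior information
    y0 = np.zeros(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.5]])
    Y1, y1 = information_update(Y0, y0, H, R, np.array([1.2]))
    x_hat = np.linalg.solve(Y1, y1)            # state estimate = Y^{-1} y
    print(x_hat)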
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
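As a minimal sketch of the peer-selection step that locality algorithms optimize, the snippet below ranks candidate peers by an assumed proximity metric (measured RTT; AS hops or geographic distance are common alternatives):

    def select_peers(candidates, rtt_ms, k=3):
        """Pick the k candidate peers with the lowest measured round-trip
        time. rtt_ms maps peer id -> latency; RTT is just one of the
        heterogeneous proximity criteria the survey discusses."""
        return sorted(candidates, key=lambda peer: rtt_ms[peer])[:k]

    rtt = {"peerA": 12.0, "peerB": 95.0, "peerC": 33.0, "peerD": 7.5}
    print(select_peers(rtt.keys(), rtt, k=2))   # -> ['peerD', 'peerA']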
Abstract:
In this paper, a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. For the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination, and mutation, are analyzed here. Several selection procedures are examined, namely tournament, elitism, and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies of the initial population are randomly produced, so that radial configurations are generated through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees.
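To illustrate how random radial topologies can be generated with a spanning-tree algorithm, the sketch below applies a Kruskal-style construction to randomly ordered branches of a small assumed network; the EA itself is not reproduced.

    import random

    def random_radial_config(nodes, branches):
        """Kruskal-style construction over randomly ordered branches:
        returns a random spanning tree, i.e. a radial configuration."""
        parent = {n: n for n in nodes}

        def find(n):                        # union-find root lookup
            while parent[n] != n:
                parent[n] = parent[parent[n]]   # path compression
                n = parent[n]
            return n

        tree = []
        for a, b in random.sample(branches, len(branches)):
            ra, rb = find(a), find(b)
            if ra != rb:                    # keep branch only if it closes no loop
                parent[ra] = rb
                tree.append((a, b))
        return tree

    # Illustrative 5-bus network with redundant (switchable) branches.
    buses = list(range(5))
    lines = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
    print(random_radial_config(buses, lines))   # 4 branches, no loops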
Abstract:
The properties of recycled aggregate produced from mixed (masonry and concrete) construction and demolition (C&D) waste are highly variable, which restricts the use of such aggregate in structural concrete production. The development of classification techniques capable of reducing this variability is instrumental for quality control purposes and for the production of high-quality C&D aggregate. This paper investigates how the classification of C&D mixed coarse aggregate according to porosity influences the mechanical performance of concrete. Concretes using a variety of C&D aggregate porosity classes and different water/cement ratios were produced and their mechanical properties measured. For concretes produced with constant volume fractions of water, cement, natural sand, and coarse aggregate from recycled mixed C&D waste, the compressive strength and Young's modulus are direct exponential functions of the aggregate porosity. The sink-and-float technique is a simple laboratory density separation tool that facilitates the separation of cement particles with lower porosity, a difficult task when done only by visual sorting. For this experiment, separation using a 2.2 kg/dm³ suspension produced recycled aggregate (porosity less than 17%) that yielded good performance in concrete production. Industrial gravity separators may lead to the production of high-quality recycled aggregate from mixed C&D waste for structural concrete applications.
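The reported exponential dependence can be written in a generic form such as the following, where A, B, k and m are fitted constants assumed here for illustration, not values from the paper:

    f_c(P) = A\,e^{-kP}, \qquad E(P) = B\,e^{-mP}

where f_c is the compressive strength, E the Young's modulus and P the aggregate porosity.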
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are based on Localized Partial Evaluation and on variational techniques, respectively.
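The following toy sketch illustrates what set-valued probabilities mean in the binary case by brute-force enumeration of interval extreme points; it is not the polynomial-time 2U algorithm, and the intervals are invented.

    from itertools import product

    # Tiny credal "network": binary parent U with an interval marginal and
    # binary child X with interval conditionals (all values illustrative).
    p_u1 = (0.3, 0.5)            # P(U=1) in [0.3, 0.5]
    p_x1_given = {1: (0.7, 0.9), # P(X=1 | U=1) in [0.7, 0.9]
                  0: (0.1, 0.2)} # P(X=1 | U=0) in [0.1, 0.2]

    def bounds_x1():
        """Min/max of P(X=1) over all extreme points of the credal sets."""
        values = []
        for pu, c1, c0 in product(p_u1, p_x1_given[1], p_x1_given[0]):
            values.append(pu * c1 + (1 - pu) * c0)   # total probability
        return min(values), max(values)

    print(bounds_x1())   # -> (0.28, 0.55): interval bounds for P(X=1)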
Abstract:
The flowshop scheduling problem with in-process blocking is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising.
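A minimal GRASP sketch under assumed data: the evaluation function implements the standard blocking-flowshop departure-time recursion for total tardiness, while the construction and local-search phases are generic, not the paper's exact heuristic.

    import random

    def total_tardiness(seq, p, due):
        """Total tardiness of a job permutation in a flowshop with blocking:
        a finished job holds its machine until the next machine is free."""
        m = len(p[0])
        prev = [0.0] * (m + 1)          # prev[k]: predecessor's departure from machine k
        total = 0.0
        for j in seq:
            cur = [0.0] * (m + 1)
            cur[0] = prev[1]            # job enters machine 1 when it is vacated
            for k in range(1, m):
                cur[k] = max(cur[k - 1] + p[j][k - 1], prev[k + 1])
            cur[m] = cur[m - 1] + p[j][m - 1]   # last machine never blocks
            total += max(0.0, cur[m] - due[j])
            prev = cur
        return total

    def grasp(p, due, iters=100, rcl=2):
        """GRASP sketch: randomized earliest-due-date construction followed
        by first-improvement swap local search."""
        n, best, best_cost = len(p), None, float("inf")
        for _ in range(iters):
            left = sorted(range(n), key=lambda j: due[j])
            seq = []
            while left:                 # pick among the `rcl` most urgent jobs
                seq.append(left.pop(random.randrange(min(rcl, len(left)))))
            cost, improved = total_tardiness(seq, p, due), True
            while improved:
                improved = False
                for a in range(n - 1):
                    for b in range(a + 1, n):
                        seq[a], seq[b] = seq[b], seq[a]
                        c = total_tardiness(seq, p, due)
                        if c < cost:
                            cost, improved = c, True
                        else:
                            seq[a], seq[b] = seq[b], seq[a]
            if cost < best_cost:
                best, best_cost = seq[:], cost
        return best, best_cost

    # Three jobs on two machines (processing times and due dates assumed).
    print(grasp([[3, 2], [1, 4], [2, 2]], [5, 6, 9]))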
Abstract:
Objective: To describe onset features, classification, and treatment of juvenile dermatomyositis (JDM) and juvenile polymyositis (JPM) from a multicentre registry. Methods: Inclusion criteria were onset age lower than 18 years and a diagnosis of any idiopathic inflammatory myopathy (IIM) by the attending physician. Bohan & Peter (1975) criteria categorisation was established by a scoring algorithm to define JDM and JPM based on clinical protocol data. Results: Of the 189 cases included, 178 were classified as JDM, 9 as JPM (19.8:1) and 2 did not fit the criteria; 6.9% had features of chronic arthritis and connective tissue disease overlap. Diagnosis classification agreement occurred in 66.1%. Median onset age was 7 years; median follow-up duration was 3.6 years. Malignancy was described in 2 (1.1%) cases. Muscle weakness occurred in 95.8%, heliotrope rash in 83.5%, and Gottron plaques in 83.1%; 92% had at least one abnormal muscle enzyme result. Muscle biopsy, performed in 74.6%, was abnormal in 91.5%, and electromyogram, performed in 39.2%, was abnormal in 93.2%. Logistic regression analysis was done in 66 cases with all parameters assessed, and only aldolase was significant as an independent variable for definite JDM (OR=5.4, 95% CI 1.2-24.4, p=0.03). Regarding treatment, 97.9% received steroids; 72% received, in addition, at least one of methotrexate (75.7%), hydroxychloroquine (64.7%), cyclosporine A (20.6%), IV immunoglobulin (20.6%), azathioprine (10.3%) or cyclophosphamide (9.6%). In this series, 24.3% developed calcinosis and the mortality rate was 4.2%. Conclusion: Evaluation of the predefined criteria set for a valid diagnosis indicated aldolase as the most important parameter associated with definite JDM; steroids combined with methotrexate were the most indicated treatment.
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with sample sizes of up to 400 individuals and different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of repulsion linkage between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE under the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009.
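As a concrete example of one of these criteria, SARF simply sums the recombination fractions between adjacent markers of a candidate order; the sketch below evaluates it on an invented three-marker recombination matrix.

    # SARF (sum of adjacent recombination fractions): lower is better,
    # since good orders place tightly linked markers next to each other.
    # The recombination-fraction values below are illustrative.
    rf = {("A", "B"): 0.03, ("A", "C"): 0.10, ("B", "C"): 0.04}

    def pair_rf(a, b):
        return rf.get((a, b)) or rf[(b, a)]

    def sarf(order):
        return sum(pair_rf(a, b) for a, b in zip(order, order[1:]))

    print(sarf(["A", "B", "C"]))   # 0.07 -> preferred over
    print(sarf(["B", "A", "C"]))   # 0.13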
Abstract:
Oxidative stress is a physiological condition that is associated with atherosclerosis, and it can be influenced by diet. Our objective was to group fifty-seven individuals with dyslipidaemia controlled by statins according to four oxidative biomarkers, and to evaluate the diet pattern and blood biochemistry differences between these groups. Blood samples were collected and the following parameters were evaluated: diet intake; plasma fatty acids; lipoprotein concentration; glucose; oxidised LDL (oxLDL); malondialdehyde (MDA); and total antioxidant activity by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing ability power assays. Individuals were separated into five groups by cluster analysis. All groups showed a difference with respect to at least one of the four oxidative stress biomarkers. The separation of individuals along the first axis was based upon their total antioxidant activity. Clusters located on the right side showed higher total antioxidant activity, higher myristic fatty acid and lower arachidonic fatty acid proportions than clusters located on the left side. A negative correlation was observed between DPPH and the peroxidability index. The second axis showed differences in oxidation status as measured by MDA and oxLDL concentrations. Clusters located on the upper side showed higher oxidative status and lower HDL cholesterol concentration than clusters located on the lower side. There were no differences in diet among the five clusters. Therefore, fatty acid synthesis and HDL cholesterol concentration seem to exert a more significant effect on the oxidative conditions of individuals with dyslipidaemia controlled by statins than does their food intake.
Abstract:
The Biopharmaceutics Classification System (BCS) is a tool created to categorize drugs into different groups according to their solubility and permeability characteristics. Through a combination of these factors and physiological parameters, it is possible to understand the absorption behavior of a drug in the gastrointestinal tract, thus contributing to cost and time reductions in drug development, as well as reducing the exposure of human subjects during in vivo trials. Solubility is established by determining equilibrium solubility under physiological pH conditions, while different methods may be employed for evaluating permeability. On the other hand, the intrinsic dissolution rate (IDR), defined as the rate of dissolution of a pure substance under constant temperature, pH, and surface area conditions, among others, may correlate better with the in vivo dissolution dynamics than the solubility test. The purpose of this work is to discuss the intrinsic dissolution test as a tool for determining the solubility of drugs within the scope of the Biopharmaceutics Classification System (BCS).
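In its usual formulation, the IDR normalizes the dissolution rate by the constant exposed surface area; the symbols below are the standard ones, not notation from the paper:

    \mathrm{IDR} = \frac{1}{A}\,\frac{dm}{dt}

where m is the dissolved mass, t the time and A the constant exposed surface area, so IDR is commonly reported in mg·min⁻¹·cm⁻².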
Abstract:
A chemotaxonomic analysis of a database containing various types of compounds from the Heliantheae tribe (Asteraceae) using Self-Organizing Maps (SOM) is described. The numbers of occurrences of 9 chemical classes in different taxa of the tribe were used as variables. The study shows that SOM applied to chemical data can contribute to differentiating genera, subtribes, and groups of subtribes (subtribe branches), as well as to tribal and subtribal classifications of Heliantheae, exhibiting a high hit percentage comparable to that of an expert performance and in agreement with the tribe classification previously proposed by Stuessy.
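A minimal SOM training loop over occurrence-count vectors, for readers unfamiliar with the method; the grid size, schedules and toy data are assumptions, not the study's settings.

    import numpy as np

    def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal Self-Organizing Map: each taxon's 9-dimensional vector of
        chemical-class counts is mapped to its best-matching grid unit."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                      indexing="ij"), axis=-1)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighborhood
            for x in rng.permutation(data):
                bmu = np.unravel_index(
                    np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
                dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
                nbh = np.exp(-dist2 / (2 * sigma ** 2))   # neighborhood kernel
                weights += lr * nbh[..., None] * (x - weights)
        return weights

    # Toy occurrence counts for 3 taxa over 9 chemical classes.
    taxa = np.array([[5, 0, 2, 1, 0, 0, 3, 0, 1],
                     [0, 4, 0, 0, 2, 1, 0, 3, 0],
                     [4, 1, 2, 0, 0, 0, 2, 0, 1]], dtype=float)
    som = train_som(taxa)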
Abstract:
Recently, we built a classification model capable of assigning a given sesquiterpene lactone (STL) to exactly one tribe of the plant family Asteraceae from which the STL has been isolated. Although many plant species are able to biosynthesize a set of peculiar compounds, the occurrence of the same secondary metabolites in more than one tribe of Asteraceae is frequent. Building on our previous work, in this paper we explore the possibility of assigning an STL to more than one tribe (class) simultaneously. When an object may belong to more than one class simultaneously, it is called multilabeled. In this work, we present a general overview of the techniques available to examine multilabeled data. The problem of evaluating the performance of a multilabeled classifier is discussed. Two particular multilabeled classification methods, cross-training with support vector machines (ct-SVM) and multilabeled k-nearest neighbors (ML-kNN), were applied to the classification of the STLs into seven tribes from the plant family Asteraceae. The results are compared to a single-label classification and are analyzed from a chemotaxonomic point of view. The multilabeled approach allowed us to (1) model the reality as closely as possible, (2) improve our understanding of the relationship between the secondary metabolite profiles of different Asteraceae tribes, and (3) significantly decrease the number of plant sources to be considered when searching for a certain STL. The presented classification models are useful for the targeted collection of plants with the objective of finding plant sources of natural compounds that are biologically active or possess other specific properties of interest.
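The sketch below implements a simplified neighbor-count variant of multilabel kNN (assign every tribe label carried by a majority of the k nearest training compounds); full ML-kNN additionally applies a Bayesian MAP correction, and the descriptor vectors here are invented.

    import numpy as np

    def multilabel_knn(X_train, Y_train, x, k=3):
        """Simplified multilabel kNN: predict each label held by more than
        half of the k nearest neighbors. Y_train is a binary (n, L) matrix
        (L = number of tribes); real ML-kNN adds a Bayesian MAP step."""
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(d)[:k]
        votes = Y_train[nearest].sum(axis=0)
        return (votes > k / 2).astype(int)

    # Toy descriptors (rows) and tribe memberships for 4 STLs, 2 tribes.
    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    Y = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])
    print(multilabel_knn(X, Y, np.array([0.15, 0.15])))   # -> [1 1]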
Abstract:
Developing a unified classification system to replace four of the systems currently used in disability athletics (i.e., track and field) has been widely advocated. The diverse impairments to be included in a unified system require several assessment methods, the results of which cannot be meaningfully compared. Therefore, the taxonomic basis of current classification systems is invalid in a unified system. Biomechanical analysis establishes that force, a vector described in terms of magnitude and direction, is a key determinant of success in all athletic disciplines. It is posited that all impairments to be included in a unified system may be classified as either force magnitude impairments (FMI) or force control impairments (FCI). This framework would provide a valid taxonomic basis for a unified system, creating the opportunity to decrease the number of classes and enhance the viability of disability athletics.